EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Rockefeller Philanthropy and Modern Social Science

Author(s): Seim, David L.
Reviewer(s): Critchlow, Donald T.

Published by EH.Net (December 2014)

David L. Seim, Rockefeller Philanthropy and Modern Social Science. London: Pickering and Chatto, 2013.  ix + 265 pp. $120 (cloth), ISBN: 978-1-84893-391-0.

Reviewed for EH.Net by Donald T. Critchlow, Department of History, Arizona State University.

In this age of excessive wealth, the Rockefellers, John D. and his son John D. Jr., provide an early twentieth-century example of how great wealth can be used to better the world.  Through the establishment of the Rockefeller Foundation, huge sums of money were given to philanthropic causes.  The Rockefeller Foundation’s greatest contribution arguably lay in the advancement of medicine, but its efforts in education and the social sciences were notable.

Historian David Seim focuses his short book on Rockefeller philanthropy in the social sciences from 1900 through the 1920s.  Seim eschews deep analysis for a straightforward narrative of Rockefeller involvement in a wide range of projects to support individual social scientists, advance social science research and education, and institutionalize the social sciences within universities and interdisciplinary research institutions.  His book reads like a lengthy institutional report on a dizzying array of projects, but the wealth of information contained in his study is rewarding for any scholar interested in the history of the social sciences, university education, race relations, and public policy in the twentieth century.

The period from the late nineteenth century up to the Great Depression starting in 1929 can be described as the “Golden Age” of the American social sciences. The emergence of the modern social sciences in this period, so ably described by historians such as Thomas Haskell, Barry Karl, Lawrence Cremin, Mary Furner, and others, projected an optimism that empirical social science research could better the world. The accumulation of empirically derived knowledge about human behavior and nature, these early social scientists maintained, was critical to reforming society, ensuring progress, and overcoming what they believed was a lag between scientific and technological advancement and traditional culture and customs. The confidence of early social scientists in their role in advancing society manifested hubris, but in the process American higher education was transformed and the social sciences became institutionalized. John D. Rockefeller, his son, and a brilliant staff played a critical role in this transformation.

Having earned a fortune in oil, John D. Rockefeller, a devout Baptist, believed that his wealth should be put to use in bettering the world.  At first he directed his charity mostly toward missionary organizations, educational institutions, and projects. From the outset he gave significant funds to African-American and Native American causes, including black seminaries and Indian schools. Overwhelmed by requests for support — sometimes reaching hundreds of letters each day — Rockefeller hired Dr. Frederick T. Gates, a Minneapolis minister, to organize his philanthropic activities. After retiring from business in 1896, John D. Rockefeller joined with his son, John D. Jr., to direct his philanthropy. In 1901, they decided to establish the Rockefeller Institute for Medical Research. This was followed by the establishment of the General Education Board, which directed much of its money toward the South and black education. In 1913, they established the Rockefeller Foundation, with the specific goal of serving “The Well Being of Mankind throughout the World” (Seim, pp. 58-59). The Rockefeller Foundation collaborated with the Carnegie Institution and the Russell Sage Foundation in promoting the social sciences.

The first efforts of the Rockefeller Foundation were small, providing financial support to the Bureau of Social Hygiene, a Division of Industrial Relations, and an Institute of Economics (1922), which later developed into the Brookings Institution.  The Bureau of Social Hygiene provided support for research into the “prostitution problem,” eugenics, and the establishment of Margaret Sanger’s American Birth Control League.

The turning point in Rockefeller’s involvement in the social sciences came with the establishment of the Laura Spelman Rockefeller Memorial Fund in 1918, named after Rockefeller’s late wife. With an original endowment of $13 million, later extended to $74 million, the fund developed an extensive program to assist the well-being of women and children and to provide major resources for the broad advancement of knowledge, methods, and application in the social sciences. The first years of the Spelman Memorial Fund focused on women and children, including support for the East Harlem Health Center, the Maternity Center Association of Manhattan, the YMCA and YWCA, the Boy Scouts and Girl Scouts, the Salvation Army, and the American Child Health Association. Headed by Beardsley Ruml, a University of Chicago-trained Ph.D. who had studied with James R. Angell, the Memorial Fund turned its attention to the advancement of the social sciences in 1923. Key advisers such as Abraham Flexner, Raymond Fosdick, and Henry Embree played important roles in shaping the Memorial Fund program.

Seim details the multiple activities of the Spelman Memorial Fund through specific grants to educational institutions, individual research projects, the creation of research centers, and areas of research.  Seim ably outlines the full extent of these projects, showing how Ruml and his associates carefully developed and directed a program to fund the social sciences in America.  The major focus of this program was to redress what was seen as a cultural lag in American society, and to develop knowledge useful to maintaining what was described at the time as “social control” in human behavior. By social control, as Seim observes, Rockefeller people meant social advancement. This was a reform agenda that sought to distinguish empirical research by non-partisan experts from narrow business and class interests.

As these research programs developed, Ruml and his advisers expressed particular concern that funds be targeted toward institutional advancement within the universities and interdisciplinary organizations. Ruml did not limit funding to only American universities. In 1923, the London School of Economics began a long-term relationship with the Rockefeller Foundation.

In America, Ruml targeted funding to major institutions, including the University of Chicago, which had been founded largely with John D. Rockefeller money in 1892. Spelman Memorial funds proved vital in developing what became known as the Chicago School of Sociology. Much of the Chicago School’s work focused on studies of ethnic and race relations. This focus on race relations was evident as well in funding to the University of North Carolina, where major research was conducted on the state of race relations in the South and the means of bettering them. At Columbia University in New York, Rockefeller funded major research on black southern migration to the North. Major Spelman Memorial grants went to Harvard University, especially to support the pioneering work of G. Elton Mayo.  Other funding — also on race relations — went to Western Reserve University in Cleveland, and to Charles S. Johnson at Fisk University. A graduate student of Robert E. Park at the University of Chicago, Johnson published The Negro in American Civilization in 1930.

Spelman Memorial funds were directed to China, the Soviet Union, Sweden, and Western Europe, often toward research in what now would be called economic development. Seim notes that one of the black marks on Spelman Memorial funding during this period was support of eugenics research in the United States, as well as in Australia and Germany, where funds were used to support the Kaiser Wilhelm Institute for Psychiatry and the Kaiser Wilhelm Institute for Anthropology, Eugenics and Human Heredity. At the same time, Ruml supported research in international relations with a particular goal of aiding the League of Nations. Major funding helped launch the Social Science Research Council, under the direction of University of Chicago political scientist Charles Merriam. Less attention was given to the humanities, although the fund directed some funding toward historians, especially in France.

Seim ends his study with the merging of the Spelman Fund into the Rockefeller Foundation in 1929.  In accomplishing his intent to explain “the creation of the ideal of neutral, public-oriented social scientists” (p. 239), Seim does not evaluate more fundamental questions raised by the rise of specialized, empirical social science research. The mindset of Ruml and the Rockefeller Foundation assumed that empirical social science research would improve the world. In many ways, it did and continues to do so today. Yet the mindset of early Rockefeller Foundation officers often precluded larger fundamental questions that had been explored by earlier philosophers and political thinkers. The ancient Greeks, Plato and Aristotle, asked basic questions about the meaning of truth, justice, and a good society.  Adam Smith and David Hume examined what makes for a well-ordered society.  Alexis de Tocqueville, less than a century before the founding of the Rockefeller Foundation, asked about the relationship of equality and liberty in a democratic society, while warning of a “soft despotism” that comes with a breakdown in civil society and the rise of a bureaucratic state. Already in the 1920s, political thinkers such as Ludwig von Mises and F.A. Hayek were challenging the hubris of economic planners and regulators. Earlier thinkers may have reached wrong conclusions, but debate over these fundamental issues rests generally outside the realm of narrow empirical social science research, as envisioned by the “new” social science in the early twentieth century.

The new social scientists in this golden age rejected the deductive reasoning of the past — the ancient Greeks and Christian theologians. The new social scientists found such debate maddening and ultimately irresolvable.  Yet, without dismissing the importance of the contributions that empirical modern social science can impart to our understanding of the world — often funded then and today by philanthropic foundations — the question that should have confronted the promoters of the new social sciences was simply: Are we too narrow, too exclusive, and too confident as to the ultimate contribution we can make to what makes for a just, well-ordered, liberal society in our often facile dismissal of previous thinkers?

Donald T. Critchlow is Director of the Arizona State University Center for Political Thought and Leadership. His most recent books include The Brookings Institution: Expertise and the Public Interest in a Democratic Society; When Hollywood Was Right: How Movie Moguls, Film Stars, and Big Business Remade American Politics; and A Very Short Introduction to American Political History (forthcoming).

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (December 2014). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s): History of Economic Thought; Methodology
Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s): North America
Time Period(s): 20th Century: Pre WWII

American Railroads: Decline and Renaissance in the Twentieth Century

Author(s): Gallamore, Robert E.
Meyer, John R.
Reviewer(s): Brown, John Howard

Published by EH.Net (December 2014)

Robert E. Gallamore and John R. Meyer, American Railroads: Decline and Renaissance in the Twentieth Century. Cambridge, MA: Harvard University Press, 2014. xiii + 506 pp. $55 (hardcover), ISBN: 978-0-674-72564-5.

Reviewed for EH.Net by John Howard Brown, Department of Finance and Economics, Georgia Southern University.

This book presents a historical overview of the American railroad industry through the course of the twentieth century.  The central thesis of the volume is that many of the ills suffered by the rail mode in this century were a product of the regulatory environment they faced.  In particular, in the post-World War II environment where highway transportation became an increasingly effective alternative to both freight and passenger rail service, regulation hampered effective responses by rail firms.  In the early 1970s the rigor mortis induced by regulation combined with the collapse of manufacturing in the Northeastern region of the United States thrust much of the rail capacity in the region into bankruptcy.  Salvation was delivered to the rail industry by the Staggers Act of 1980 which dramatically broadened the scope of business decisions that railroads could make without regulatory monitoring.  This thesis will be uncontroversial for economists and business historians who have studied the industry and particularly its post-1980 resurgence.

The first chapter is a paean to the “Enduring American Railroads.”  The second chapter discusses the ills of regulation in the context of the peculiar economics of the rail industry.  Curiously, given the pivotal role which sunk costs have assumed in economic theories of competition over the last forty years, the term is unmentioned in the text and index, even though the first topic discussed in the chapter is whether railroads are natural monopolies.  Since the expense of roadbed and right-of-way is both substantial and sunk, entry into a market where an incumbent is established is quite difficult, since the incumbent has already sunk its costs.  The incumbent can credibly threaten to force rates below average costs, making entry unprofitable.  This dynamic often leads to monopoly.  Where entry actually occurs, the alternative is often “destructive competition” featuring rates that cannot recover the full costs of service.  A railroad’s system is thus often a checkerboard of competitive and monopolized routes.  This very characteristic of railways was responsible for attempts to regulate the industry in the nineteenth century, culminating in the Interstate Commerce Act.  The chapter concludes with a box laying out Ten Principles of Transportation Economics.  These are uncontroversial in themselves.  However, they do not offer so much as a hint of the role of sunk costs.

The third chapter summarizes the history of government control of railroads in the first half of the twentieth century.  The authors divide this era into three sub-periods: the antitrust episode, the period of direct government operation, and the period following the Transportation Act of 1920.  The antitrust episode was initiated by the Northern Securities case and represented a strike against what was viewed as excessive concentration of control in the transcontinental roads.  The authors view this episode as ill-conceived since after the Staggers Act of 1980 essentially the same systems were created in a new round of mergers.

The experience of government control during the First World War is in contrast reviewed positively.  Although it is treated as the product of special circumstances and the peculiarity of the design of the American railroad system, the authors do draw some lessons from the events.  These lessons consisted largely of the virtues of eliminating wasteful duplication of efforts.  This was the same reasoning that J.P. Morgan employed in snuffing out competition in many industries during the 1890s.  After the war, however, the roads were rapidly returned to private control.

This leads to the third epoch of the century where public rail policy was determined by the Transportation Act of 1920.   Official policy favored substantial consolidation of rail systems.  However, consolidation was never achieved.

The fourth chapter purports to examine the role that competition from alternative freight transportation modes played in the evolution of the rail industry.  In fact, the chapter is poorly organized, shifting between transport modes, policy recommendations, and historical eras haphazardly.  Three points stand out regarding the topic of intermodal freight competition.  First, prior to about 1950, waterborne carriage provided a limited competitive check on railroads in some regions of the United States, i.e. the Atlantic, Pacific, and Gulf Coasts; the Great Lakes; and the Mississippi-Ohio-Missouri basin.  Next, after 1950, highway transportation and air transport increased competitive pressure on the rail industry in freight markets.  Finally, government policies resulted in substantial subsidies to these alternative modes.  In particular, unlike railroads, whose rights-of-way are privately owned and maintained, internal waterway improvements, highways, and airports are usually built and maintained at government expense.

The following chapter tells a similar story regarding rail passenger traffic.  Once again, competition from competing modes was ineffectual prior to World War II.  Afterwards, both highways and air travel supplanted railroads in passenger service.  Some of this was attributable to the improved comfort and safety of these modes due to technical improvements, particularly the development of passenger jet aircraft.  Additionally, as was the case for rail freight, the implicit subsidies provided by governmental investments in highways, airports, and the air traffic control system placed the rails at a disadvantage.

The trends related in these two chapters paint a picture of steadily mounting pressure from competing and partially-subsidized modes.  The classic response of railroads to competition has always been deferral of maintenance.  Thus the rail system entered the 1960s with substantial portions of its trackage functionally obsolete or at least in sore need of maintenance.  One possible solution, merger, is the topic of the succeeding chapter.

The sixth chapter discusses the dynamics of railroad mergers during the 1950s and 60s.  As always, public policy makers were ambivalent about the issue of rail consolidation.  Two different approaches to railroad mergers were available.  Parallel mergers joined firms whose route structures were largely overlapping.  On their face, such mergers reduce competition while perhaps reducing costs by elimination of duplicate functions.  End-to-end mergers joined roads which already had a collaborative relationship due to their interchange of freight.  These mergers might improve services available to shippers but were not anticipated to result in substantial cost reductions. Having pointed out that regulators generally opted for parallel, cost-saving mergers, even though they threatened to reduce competition, the authors engage in a detailed rehearsal of rail mergers.  The chapter concludes with the wreck of the Penn Central.

The seventh chapter discusses the rather muddled public policy response to the crisis induced by Penn Central’s collapse.  The 1970s featured more Congressional policy making for railroads than any prior decade.  The initial responses in the form of the so-called 3R and 4R bills can be classified as attempts at muddling through.  The most consequential result of these acts was the creation of the United States Railway Association (USRA).  This body was charged with nothing less than the reorganization of the entire railway system.  The acts also created Conrail, an empty vessel into which the USRA was to pour its reorganization plan.  The Staggers Act concluded the decade with a radical departure from the prior eighty years of American public policy, although it was a logical extension of the deregulatory fervor which seized official Washington and the Carter administration.

The following chapter discusses the Frankenstein creation which was Conrail.  The chain reaction bankruptcies induced by the failed Penn Central merger threatened to exterminate rail freight service throughout much of the northeastern United States.  The creation of Conrail was the ad hoc response to the perceived crisis.  Like Frankenstein’s monster, Conrail took on a life of its own, far outlasting the crisis that birthed it, in the process besting a cabinet secretary determined to privatize it by merger.  Instead, it prospered as an independent, private firm up to the 1990s.

The ninth chapter backtracks chronologically to discuss the development of the Staggers Act and the consequences of deregulation.  Two features of the act were essential: streamlining the process by which railroads could abandon legacy lines that could not achieve economic traffic density and phasing out common carrier obligations.  Class I railroads aggressively pruned their route systems. All railroads took advantage of their newly acquired freedom to enter into private contracts.

The Staggers Act permitted the Interstate Commerce Commission, and its successor, the Surface Transportation Board, to “protect” captive shippers from exploitation by railroads.  The balance of the ninth chapter comprehensively covers the struggles between shippers and railroads over rates which could adequately compensate the railroads without exploiting shippers.  No very satisfactory result was ever achieved in squaring this circle.  Nor is it clear to the authors that any regulatory solution would be desirable.

The tenth chapter takes up the story of the post-Staggers Act consolidation of the U.S. rail industry into five Class I roads and innumerable short lines erected on tracks abandoned by the Class I roads.  The different mergers and their competitive logic (or occasionally lack thereof) are discussed in detail.  One flaw in this chapter is recurrent name dropping about rail executives who are mentioned positively without providing detailed evidence to support the judgments.

The final result of the flurry of mergers was paired duopolies, Burlington Northern-Santa Fe and Union Pacific-Southern Pacific west of the Mississippi and Norfolk Southern with CSX in the east.  Kansas City Southern remains an anomaly operating routes predominantly north and south while the other Class I operators’ traffic flows are east to west.  The authors conclude with a report card of the final four.

Chapter eleven returns to passenger rail and the Amtrak experience.  The dilemma of rail passenger service policy is twofold.  On one hand, Congress has sought to have Amtrak succeed on a commercial basis, i.e. cover expenses from passenger revenues.  This policy implies minimal federal subsidies.  However, only in a few high-density corridors in the United States, particularly between Boston and Washington and in some very large metropolitan areas, can passenger traffic support rail service on such a basis.  Once subsidies are required, the logic of legislative deal making demands passenger routes that can never be successful on a commercial basis in order to build legislative majorities.  For example, Amtrak’s transcontinental routes are not economically viable.

Chapter twelve documents the remarkable technological progressiveness of American railways in the twentieth century despite their not infrequent economic distress.  Broadly speaking two sources of technical progress can be identified, innovations embodied in physical capital and innovations in business practices.  The first is represented for railroads by the replacement of steam locomotives with diesel at mid-century.  The development in the post-Staggers era of the unit train illustrates the second category of technical change.

In addition, some improved technology was developed specifically for railroads, such as the diesel-electric locomotive and improved systems for braking and train control.  Other technologies were generated as a part of the twentieth century’s remarkable technological efflorescence but were readily adapted to the needs of railroads, e.g. computers and communication systems. The cumulative, synergistic effect of these changes was the transformation of a labor- and fuel-intensive industry in the late nineteenth century into an industry of unmatched productivity with respect to both in the early twenty-first.

The thirteenth chapter provides a summing up, consisting of ten propositions regarding the experience of the railroad industry during the twentieth century.  The first is that regulation was hugely damaging to the industry in the first eight decades of the century.  At the same time the authors concede in their second point that railroads represent an industry “affected with the public interest.”  Financial crises and bankruptcy under the competitive pressure from alternative modes constitute their points three through five.  Points six, seven, and ten take up the story of the chaotic 1970s, the belated regulatory reforms, particularly the Staggers Act, and the resurgence the railroads experienced in the final two decades.  Propositions eight and nine cover the notable progress of railway technology and the travails of passenger rail service.  There is nothing to criticize in this summary, although, as is too frequently the case, the organization of the presentation leaves something to be desired.

A final chapter closes the book with some observations about further regulatory reforms that might reinforce the current good health of the American rail industry.  The authors also highlight some post-millennial developments and their implications for the future of the industry.  These recommendations and observations are of particular interest in light of recent proposals of further Class I mergers.

This book is comprehensive and the authors clearly quite knowledgeable.  John Meyer, now deceased, was a professor at Harvard University; Robert Gallamore, a former student of Meyer’s, is now retired from the rail industry and teaches at Michigan State University.  However, this book disappoints.  The authors have chosen a narrative structure that is neither fish nor fowl.  The chapter organization is largely topical.  However, the organization of the topics follows an imperfect chronological ordering.  The internal organization of chapters leaves much to be desired also, since they tend to leap from subject to subject with little connective logic.

The book appears also to suffer from sloppy editing.  Thus in one chapter we read that, “no comparable period in U.S. history had less success in actual railroad consolidations than the forty-year span, 1920 to 1940” (p. 66).  In several other places, tables displaying economic values over substantial time periods are reported in nominal, rather than real terms (see Figure 5.1, p. 121 and Figure 11.2, p. 333).  The latter figure distorts the levels of support Amtrak has received over its lifetime by understating subsidies in the 1970s and overstating the subsidies of the twenty-first century.

In summary, this book imperfectly fills the need for a comprehensive historical treatment of the American railroad industry in the last century.  Given the inherent interest and importance of the subject, this is unfortunate.

John Howard Brown is an Associate Professor of Economics in the Department of Finance and Economics at Georgia Southern University.  His article, “The ‘Railroad Problem’ and the Interstate Commerce Act” was published in a special issue of the Review of Industrial Organization on the 125th anniversary of the Interstate Commerce Act.


Subject(s): Transport and Distribution, Energy, and Other Services
Geographic Area(s): North America
Time Period(s): 20th Century: Pre WWII
20th Century: WWII and post-WWII

Fragile by Design: The Political Origins of Banking Crises and Scarce Credit

Author(s): Calomiris, Charles W.
Haber, Stephen H.
Reviewer(s): Rockoff, Hugh

Published by EH.Net (September 2014)

Charles W. Calomiris and Stephen H. Haber, Fragile by Design: The Political Origins of Banking Crises and Scarce Credit.  Princeton, NJ: Princeton University Press, 2014.  xi + 570 pp. $35 (cloth), ISBN: 978-0-691-15524-1.

Reviewed for EH.Net by Hugh Rockoff, Department of Economics, Rutgers University.

Charles W. Calomiris and Stephen H. Haber, two of America’s leading financial historians, have written an ambitious and, in my view, largely successful book explaining the political economy of banking through history and across nations. The central question is why some banking systems provide both abundant credit and financial stability over long periods while others, including unfortunately the financial system of the United States, fail to do so.

Summary

Calomiris and Haber develop their analytic framework in chapters 1 through 3. They do so in words. There are no equations to deter the mathematically challenged and prevent their book from reaching a wide audience. Their main point is that banking systems are always and everywhere a political construct: the outcome of what they call the “game of bank bargains.” The players in this game are the government, the public, and various interest groups including, of course, bankers and would-be bankers. Governments want the revenues that can be extracted from the banking system and political support. Interest groups in turn want favors from the banking system, typically cheaper credit. The public wants abundant credit and a stable banking system. The outcome of the game of bank bargains depends on the underlying political system. The most important distinction is between democracies and autocracies.  In some democracies, but by no means all, the outcome of the game of bank bargains is a system that provides stable and abundant credit. A democratic government must provide a system that works in some measure for the general public if it is to stay in power. Sometimes, however, the tendency for the game of bank bargains to end favorably in democracies is undermined by what Calomiris and Haber refer to as populism, and in particular agrarian populism. To secure the support of agricultural interests, governments may impose restrictions on banks — restrictions on where they can locate, who they can lend to, how much they can charge on loans, when and how much they can collect on debts, and so on. These restrictions may benefit the agrarian interests that sought them while reducing the overall supply of credit and the stability of the system. But an alliance with agrarians may help the ruling party to stay in power. The story with autocratic governments is different: there, the government tends to extract as much revenue as possible from the banking system, even at the cost of the overall growth and stability of the economy.

This thesis (which is developed in considerably more detail) is illustrated with historical studies of banking in three democracies (Britain, the United States, and Canada) and two autocracies during much of their history (Mexico and Brazil), with briefer looks at other countries. Britain is covered in chapters 4 and 5. There is a great deal of historical material in these chapters. Along the way one learns about the gradual evolution of democracy in Britain, the disruptive economic and financial effects of wars, and the gradual and fluctuating transformation of the banking system from one at the service of the state to one responsive to the private sector. Professors of economic history may find themselves skipping parts of the narrative here, but the non-specialist can learn a great deal by reading straight through. In general, the earlier history will be less likely to provoke controversy than the more recent history. Perhaps it is simply the clarity of hindsight. Their story of how Britain relied on inflationary finance during the Napoleonic wars, for example, will raise few eyebrows. The authors’ apparent enthusiasm for Margaret Thatcher’s economic revolution (pp. 147-48) is likely to meet more resistance. The recent crisis, unfortunately, is not analyzed in detail. We learn that British banks were vulnerable to an international crisis because of the boom in the housing market and the high leverage of the banks, but not how these vulnerabilities were the outcome of the game of bank bargains.

The American experience is covered in chapters 6 through 8. Here they address one of the great mysteries of financial history: Why has the United States, a world leader in business, education, and technology, lurched from one financial crisis to another through so much of its history? The answer for Calomiris and Haber is, to simplify a complex argument, agrarian populism. Indeed, chapter 6 is called “Crippled by Populism: U.S. Banking from Colonial Times to 1990.” The key for Calomiris and Haber is that farmers, particularly prosperous farmers, did better with local unit banks than they would have with a system of nationwide branch banking. In the short run the rates influential farmers paid might have been higher than they would have been with a nationwide branch banking system, but these farmers knew that the local bank would always be willing to lend to them, even in hard times, because it had no alternatives. The populists, who drew their strength from farmers, then formed an alliance with the unit bankers. Deposit insurance is an important outcome of that alliance. It was pushed by the populists as a protection for the common man, but helped make the unit banks competitive with the large urban banks. The resulting system, with its myriad of local unit banks, was “fragile by design.” Eventually, however, the system of unit banks was broken because of the declining economic role and political power of agriculture.

The crisis of 2008 in the United States is covered in chapters 7 and 8. As with the case of Britain, their account of recent events will be more controversial than their account of earlier periods. Chapter 7, “The New U.S. Bank Bargain: Megabanks, Urban Activists, and the Erosion of Mortgage Standards,” describes the origins of the subprime mortgage mania. The story they tell draws on a number of accounts, for example Raghuram Rajan (2011), but it fits well with their earlier emphasis on political bargains. As Calomiris and Haber see it, the crisis began with a bargain between regional banks seeking permission from regulators to merge and urban activists seeking credit for people who were too poor to qualify for home mortgages under traditional standards. By making subprime loans the banks got the approval of activist groups, which in turn meant approval by regulators for mergers — the banks were being good citizens — and the activist groups got what they thought they wanted, more credit for low income borrowers. But that was just the beginning. Fannie Mae and Freddie Mac were drawn into the coalition and then subprime loan originators such as Countrywide. Politicians benefitted directly through campaign contributions, and indirectly because they could claim to be helping the urban poor without having to levy higher taxes on the middle class.

Chapter 8 then attacks the question of why regulators didn’t force financial institutions to hold capital appropriate for the risks they were taking. In the opinion of Calomiris and Haber that would have been the key to preventing the inevitable losses on bad loans from becoming a crisis. They argue against the view that the problem was deregulation, particularly the often-cited removal of the separation of commercial banking from investment banking that had been in place since the 1930s. Here I was completely persuaded, although to be honest, this was my belief going in. Ending the separation of commercial banking and investment banking, they show, permitted mergers and conversions that helped ameliorate the crisis. The real problem was that prudential regulators didn’t do their job of demanding capital to match risky lending. In part, the reason they failed was that raising capital requirements would have discouraged subprime lending and that would have meant taking on a powerful political coalition. Not all readers will be convinced that higher capital ratios would have prevented the crisis, but most will agree that this was a part of the story.

In chapter 9 Calomiris and Haber turn from the bad boy, the United States, to the good boy, Canada. Although there have been bank failures in Canada, including large institutions, there has never been a financial crisis, not even during the Great Depression. Why? The answer, according to Calomiris and Haber, is a system of large banks with nationwide branching systems, and the resulting efficiency and diversification of risk. That happy outcome was the result of the authority to charter banks being located, from the very beginning, at the national level. I found few things to disagree with in their discussion of Canada and the contrast with the U.S. Michael Bordo, Angela Redish, and I reach a similar conclusion in a paper forthcoming in the Economic History Review.

In section three, chapters 10 through 13, Calomiris and Haber turn from democracies to authoritarian regimes. Here, not surprisingly, things turn out worse than in the democracies. Chapters 10 and 11 describe the history of banking in Mexico. Haber has written extensively about Mexico and these chapters are wonderfully detailed. Here I will just summarize a few observations that are particularly striking. Under General Porfirio Diaz (1877-1911) the banking system was controlled by favored industrialists closely tied to the government. The industrialists benefitted and the government benefitted by extracting revenues from the banking system, but the resulting system failed to provide abundant credit to fuel widespread economic development. It provided, however, at least a modicum of stability.  During the period of the Mexican Civil War (1911-1929), the banking system deteriorated as rival warlords tried to extract resources from banks in regions they controlled. Under the Partido Revolucionario Institucional (PRI), the government established investment banks that helped finance the coalition that supported this authoritarian regime. Commercial banking remained depressed, even when compared to the Diaz era.

Chapter 11 covers Mexico after 1982, a tumultuous period. Budget problems led to reliance on inflationary finance that undermined the banking system and support for the PRI. This was followed by a misguided privatization of the banking system, with the purpose of raising revenue for the government, a run up of bank credit, and finally a crash and a bailout that created further political problems. This story sounds much like the story of the savings banks in the United States. Since the bailout, a new partnership has arisen between the government and foreign banks that have entered to fill the void left by the collapse of the older system. Although Calomiris and Haber see some positives in the new system, they point out that the amount of bank credit relative to GDP — their favorite measure of the abundance of bank credit — was about the same in 2010 as it was in 1910.

Chapters 12 and 13 cover Brazil. Chapter 12 covers the period up to 1889, and chapter 13, the period after, focusing on the transition to democracy. Much of Brazil’s financial history was characterized by heavy reliance on an inflation tax. Weak autocratic regimes couldn’t tax the wealthy oligarchs who supported them. An inflation tax was the easiest alternative. The transition to democracy has, after many steps forward and backward, produced a system that is more responsive to the general interests of Brazil. Even left-of-center governments, however, have faced the problem that it is hard to tax the wealthy elite in Brazil because of their high international mobility. Calomiris and Haber end on a cautiously optimistic note, but warn that populism in Brazil, like populism in the United States, will produce a banking system that subsidizes influential interest groups at the expense of the public.

Chapter 14 looks briefly at banking in other countries — via cross-country empirical studies and short narrative histories of China, Germany, Japan, and Chile — to test the viability of the conclusions reached on the basis of the detailed case studies. Again the ability of Calomiris and Haber to master and organize a huge amount of material is impressive.

Chapter 15, the concluding chapter, wrestles with the dispiriting implication of their argument that has been growing in the background since the first chapter. If banking is always and everywhere the result of a “game of bank bargains” played by the government and powerful interest groups, what role is there for ideas? Can an economist or historian make a difference? Calomiris and Haber struggle mightily to end on an upbeat note. They argue, for one thing, that there are windows of opportunity: economic crises so severe that people are willing to turn to someone with a new set of ideas. They suggest Alexander Hamilton and Margaret Thatcher as examples. But as these examples illustrate, most of the time economists and financial historians are likely to be chroniclers of events rather than makers of history.

Comment

Calomiris and Haber blame America’s banking troubles before 1990 on “agrarian populism” and its support for unit banking. But I think there was another, albeit related, factor that needs to be added to complete the story. After all, although unit banks were popular in some parts of the United States, Americans often showed themselves willing to support branch banking. Before the Civil War, many southern states, as Calomiris and Haber note, had branch banking systems (pp. 171-73). And Ohio, Indiana, and Iowa had mutual support systems that Calomiris and Haber (pp. 174-75) celebrate. The unique weakness of the American banking system was that branching, even when permitted, ended through much of our history at the state line. But why were state governments able to keep their control over banking for so long? Support for state control of banking was an outcome of the larger battle between the states and the federal government for power. And that battle, of course, was to a great extent about race: the South was always the strongest advocate of state power. Keeping the right to charter and regulate banks at the state level, in other words, was simply one more battle in an ongoing war. The fight over the Second Bank of the United States is a good example. The bill to recharter the Bank passed the House and Senate only to be vetoed by Andrew Jackson. That vote, in itself, shows that there was strong support for nationwide branch banking. Recall that the Second Bank was not simply a banker’s bank on the Federal Reserve model. The Second Bank had branches in all parts of the country that made commercial loans. This was by any definition nationwide branch banking. Was there any opposition to rechartering the Second Bank? Or was it just Andrew Jackson who was opposed? New England, the Western States, even the slave states that would remain within the Union in the Civil War all voted to recharter in both the House and Senate. The future Confederate States were different.
With the exception of Louisiana, they voted overwhelmingly against recharter (Wilburn 1967, 9). Racism and populism, tragically, became entwined in the South. But the battle over states’ rights and racism, I believe, needs to be brought into the story as one of the reasons for the long delay in the adoption of nationwide branch banking. Racism also helps explain the desire in the United States to find a way of helping the poor that did not involve higher taxes and transfers, a desire that Calomiris and Haber discuss when they explain the origins of the subprime crisis.

Calomiris and Haber use the term populism to refer simply to all politicians and parties who put great store in the will of the common man. By their definition Thomas Jefferson, Andrew Jackson, Abraham Lincoln, and William Jennings Bryan (p. 150) were all populists. But what about populism more narrowly: the People’s Party and its charismatic leader William Jennings Bryan? The Bryanites, as Calomiris and Haber point out, eventually supported deposit insurance, which protected private unit banks. But the main goals of the populists, as can be seen in their party platforms, were nationalist and socialist, which would have ultimately undermined the local unit banks. The populists wanted to end the National Banking System and replace its bond-backed currency with fiat paper issued by the federal government. They wanted a postal savings system to provide a safe haven for the deposits of farmers and the urban poor, and they wanted the federal government to provide low-interest loans to farmers by issuing paper money based on deposits of excess grain: the subtreasury plan. These goals were all achieved in some measure: the postal savings system was established in 1910, the Federal Reserve with its government-issued currency in 1913, and various agricultural programs that provided federal loans to farmers were enacted in the 1920s and 1930s.

Finally, I would add that Calomiris and Haber focus on only two outcomes for the banking system: abundant credit and stability. These are clearly the most important. Much of the support for unit banking, however, was based on other considerations. One argument, although I have never seen much evidence for it, was that locally owned banks provided, and continue to provide, credit differently from branches of large national chains. Local bankers know the background of potential borrowers. So a borrower with a sterling character but few assets to put up as collateral would be more likely to get a loan from a locally owned bank than from a branch of a big chain. There was also the stability and continuity of the local community to think about. A local bank, it might be argued, would be more likely to provide ongoing community leadership than a branch filled with managers hoping to be promoted to the main office in New York or San Francisco as soon as possible. Perhaps it was all a fiction — Jimmy Stewart in It’s a Wonderful Life — but nevertheless it’s a possibility that we shouldn’t dismiss out of hand. Economic progress is not just about real GDP per capita.

Bottom line

This is a beautifully written book. Calomiris and Haber are always thoughtful, always clear, and they have an eye for the telling metaphor and the thought-provoking fact. More importantly, the book reflects the authors’ mastery of a vast amount of material on the history of banking. No one will be persuaded by all of their analyses, and there will be some pushback when it comes to their treatment of more recent and controversial events. Nevertheless, Fragile by Design is a must-read for economic historians, a book to be put on the shelf with O.M.W. Sprague’s History of Crises under the National Banking System, Bray Hammond’s Banks and Politics in America from the Revolution to the Civil War, and similar classics.

References:

Bordo, Michael, Angela Redish, and Hugh Rockoff (forthcoming), “Why Didn’t Canada Have a Banking Crisis in 2008 (or in 1930, or 1907, or . . .)?” Economic History Review.

Hammond, Bray (1957), Banks and Politics in America from the Revolution to the Civil War, Princeton: Princeton University Press.

Rajan, Raghuram G. (2011), Fault Lines: How Hidden Fractures Still Threaten the World Economy, Princeton: Princeton University Press.

Sprague, O. M. W. (1910), History of Crises under the National Banking System, Washington: Govt. Print. Office.

Wilburn, Jean Alexander (1967), Biddle’s Bank: The Crucial Years, New York: Columbia University Press.

Hugh Rockoff’s most recent book is America’s Economic Way of War: War and the U.S. Economy from the Spanish-American War to the Persian Gulf War. New York: Cambridge University Press, 2012.

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (September 2014). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):Asia
Europe
Latin America, incl. Mexico and the Caribbean
North America
Time Period(s):19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

Technological Innovation in Retail Finance: International Historical Perspectives

Author(s):Bátiz-Lazo, Bernardo
Maixé-Altés, J. Carles
Thomes, Paul
Reviewer(s):Wardley, Peter

Published by EH.Net (November 2013)

Bernardo Bátiz-Lazo, J. Carles Maixé-Altés, and Paul Thomes, editors, Technological Innovation in Retail Finance: International Historical Perspectives. New York: Routledge, 2011.
xvi + 319 pp.  £85/$125 (hardback), ISBN: 978-0-415-88067-1.

Reviewed for EH.Net by Peter Wardley, Department of History, Philosophy and Politics, University of the West of England (Bristol).

Until recently there was a largely unbridged historiographic divide between monetary historians, interested chiefly in stories about monetary aggregates, economic performance and financial policy, and bank historians, authors of scholarly texts that recount the origins, growth and, on occasion, the demise of specific financial institutions. However, over the last three decades this separation has been to some extent diminished, prompted in part by research that focuses on the application and organization of new technology in financial institutions. In part, this is a result of a widespread familiarity with personal computers along with a growing awareness of the importance of the role of information technology throughout the modern economy. More specifically, it has become increasingly apparent to those outside the financial sector that the processing of information within financial institutions is a major and dynamic factor that should not be neglected. If one of the earliest and most advertised events in this process was the “Big Bang” that transformed the nature of business on the London Stock Exchange in October 1986, then the “Global Financial Crisis” of 2007-08 demonstrated the extent and significance of the international networks that knit together the world’s financial institutions. In this environment, academic interest in the internal structures of banks has increased, and analysis of the adoption and organization of information technology in the financial sector, previously almost unheard of, is now well established. It is in this context that this edited collection makes a novel and valuable contribution by providing a comparative study of technological change in European and North American retail finance.

An editorial introduction, entitled “In Digital We Trust,” provides an enthusiastic and detailed justification for the study of the introduction and usage of information and communication technology (ICT) in the financial sector. This stresses the evolutionary and contingent nature of the diffusion of ICT within financial institutions, which varied according to the market segment in which they operated, the economic circumstances they faced, and the social, legal, political and cultural settings in which they developed. However, a general pattern emerges in the largely hard-headed and realistic approach adopted by the managers who were responsible for the acquisition and productive use of new machines and applications; often the technology led to the adaptation of existing methods rather than the adoption of completely new managerial practices. As the twentieth century saw a sequence of technological innovations, which were largely incremental and adaptive in nature, for the most part a long-term perspective is taken here. This recognizes the initial adoption of mechanical aids in the banking parlor (for example, the typewriter and telephone), then the employment of data recording instruments (adding machines), which was followed by increasingly complex data recording and processing machines (accounting machines, first powered manually and then by electricity). Computers of various types, from mainframe to micro, have been the most recent manifestation of this technological progression, which depended on co-developed theoretical and scientific advances that have both sped up processing and increased memory capacity, approximately according to Moore’s law, at costs that have correspondingly diminished.
The nature of this technological progression, as evident in the banking industry, is competently discussed in Lars Heide’s concluding chapter which might usefully be read as a guide to much that appears in the preceding chapters.

First, though, a note of caution. There was a major characteristic of retail finance, one that was really important in the first half of the twentieth century, the pre-mainframe era, which might not immediately strike readers more accustomed to the banking system of the United States, and which should have been more emphatically indicated here. In the U.S. the relatively small, single-unit bank adopted mechanization in order to process the widest range of functions, from recording personal statements associated with the accounts of individuals to the assessment of the aggregate financial position of the bank. In Europe, and notably in England’s “Big Five” High Street banks, the relatively much larger, multi-unit branched bank tended to undertake these two distinct operations at different locations: the former at the branch and the latter at the bank’s head office. Once recognized, this fundamental distinction explains a great deal about the different rates and patterns of technological adoption and diffusion experienced across retail banking systems in different countries. Different technologies could be used for different purposes, but similar technology could also be adopted in other environments to do different tasks. And, of course, there was some path dependency within systems that, for example, linked the adoption of mainframe computer systems after 1950 to the prior implementation of pre-World War II bank mechanization. Here it is interesting to note, at least in the British setting, that the staff associated with computerization do not appear to have learned as much as one might have expected from the experiences of their predecessors, the generation of then recently retired senior bank managers, who, it might be argued, had been more successful in introducing new technology in the interwar years.
The evidence suggests that these pioneers appear to have had a much clearer understanding of one of the themes of this book: the machine had to serve the bank rather than dictate a radical recasting of organizational practices. These themes appear prominently in the chapters here that provide national case studies.

Martha Poon’s account of the transformation of the credit risk calculations provided by the Fair Isaac Scorecard system highlights the significance of gender, a factor that is less prominent in the chapters presented here than it might be. In her story, before the introduction of the electronic system, credit applications were “put-out” as raw data to local housewives, who produced coded reports that were then processed by female operators of key-punching machines. Even with the consolidation of a more completely office-based, machine-driven system of data processing, important “heritage” aspects, residual remnants of past practices, now deeply embedded in the calibration of the credit of individual consumers, persisted even as credit scoring became increasingly digitalized and automated. By contrast, women play no visible role in Joakim Appelquist’s study of “Technical and Organizational Change in Swedish Banking, 1975-2003,” which raises the obvious question about the nature of gender (in)equalities in Nordic society. This chapter provides an explicitly specified model whereby the adoption of different generations of ICT, from mainframe to internet via PC-based LANs, corresponds with stages of a shift from Tayloristic bureaucracies to post-Tayloristic organizations “characterized by flattened management structures, outsourcing, etc.” (p. 74). However, rather than the de-skilling of the labor force in a period of new technology adoption, à la Braverman, Appelquist argues that Swedish banks re-skilled their staff, either by re-training existing workers or by substituting existing employees with replacements who were better equipped to deliver personal banking services of a more skilled nature.

Joke Mooij provides an exemplary account of the adaptation of new managerial structures and adoption of novel technologies that accompanied the consolidation in 1972 of the Rabobank Nederland (Coöperatieve Centrale Raiffeisen-Boerenleenbank), one of the world’s largest banks; this was achieved by the consolidation of two agricultural co-operative banks that had used earlier machinery within distinctive corporate cultures. Both had shared an initial commitment to Raiffeisen principles, one of which stipulated unsalaried management that made them very different from contemporary Anglo-Saxon co-operative enterprises, and faced challenges with respect to this defining characteristic caused by the technological and organizational changes narrated here.

Bernardo Bátiz-Lazo and J. Carles Maixé-Altés provide a comparative assessment of ICT adoption in a different “non-standard” corporate institution, the savings bank, by providing an historical evaluation of their development in Britain and Spain between 1950 and 1985; here they contrast the achievement in Spain of economies of scope, in the form of product portfolio, with the search for economies of scale in Britain. In both stories political factors are demonstrated to be significant, though different in character. In Spain collaboration in the acquisition and operation of ICT shared by locally orientated, and sometimes relatively small, members of the Confederation of Spanish Savings Banks (Confederación Española de Cajas de Ahorros, or CECA) allowed the associated development of distinct regionally-based institutions, each of which grew a clearly defined profile in its community. This was far removed from the British experience. There, after over one hundred and seventy-five years of relative success in providing deposit facilities for inhabitants of their respective neighborhoods, and especially to the comparatively poor, the British state in the 1970s herded together seventy-five semi-autonomous Trustee Savings Banks into a single entity, the Trustee Savings Bank (TSB), that was subsequently floated on the London Stock Exchange in 1986. Unsurprisingly, the philanthropic motives of the pioneers of the TSB movement and the local orientation of each TSB were lost to history, and within a decade the TSB was the subject of a “reverse-takeover” by Lloyds Bank. Nevertheless, the TSB has re-emerged recently when, at the insistence of the European Commission in 2012, its divestment was made a condition of the British state’s rescue package to salvage Lloyds Bank; however, the inaugurating publicity of the TSB Bank (sic) did not suggest that the re-emergence of a community-orientated financial institution devoted to the needs of the less well-off was imminent.

The role of the British state is also revealed by Alan Booth and Mark Billings in their assessment of “Techno-Nationalism, the Post Office and the Creation of Britain’s National Giro,” which documents the creation of a “very curious beast indeed” (p. 171). In addition to attempts by the British state to foster an indigenous computer-building industry, a persistent theme of the 1960s and 1970s, which could be generalized to encompass even more long-standing and recurrent policies to stimulate “high-tech” manufacturing in Britain, this chapter evidences a political divide between the two parties that formed governments in the two decades before 1979. First, after the Radcliffe Committee Report of 1957, the Conservative administration saw in a Giro system a handy tool to nudge the commercial banks to become more responsive to the needs of the economy and more willing to address the shortcomings of their business behavior. Thereafter, Tony Benn, as Postmaster General and then as Minister of Technology, in a Labour government whose leader had lauded the benefits of a scientifically informed transformation of society, supported the Giro as a prominent IT project within the nationalized sector of the economy that could contribute to this objective. However, the history of the Giro demonstrates a number of problematic features of this policy, including the difficulties of obtaining new technological capabilities which were embodied in imported equipment at a time when Britain suffered from recurrent Sterling problems and public spending difficulties.

Public perceptions of technical change in the financial sector have always been important, and senior bank officials watched keenly the response of their customers, first to mechanization in the interwar years and later to computerization. As Ian Martin documents in “Britain’s First Computer Centre for Banking: What Did This Building Do?”, Barclays Bank was eager to shape public opinion in 1961 when it opened its No. 1 Computer Centre near Euston station in London. However successful Barclays was in persuading its customers of the merits and advantages of its innovative strategy, and contrary to the case presented here, this was not the first time that customer accounting in Britain had been dislocated “from its traditional confines of the individual bank branch” to be relocated to a centralized facility. This had been achieved, in association with a publicity campaign very similar to that narrated here, by the Westminster Bank some thirty years earlier (Wardley, 2000). As this strategy was dependent upon the comprehensive mechanization of record keeping at its Head Offices (at Lothbury, hard by the Bank of England), readers should also treat with caution Martin’s associated statement that “British banks, with the exception of the Bank of Scotland …, did not make use of tabulating machines to perform centralised branch accounting” (p. 37). The Bank of Scotland, as with other banks in Scotland, was a laggard in this respect; in England, even the Co-operative Bank had mechanized by 1935.

Hubert Bonin’s “Mechanization of Data Processing and Accounting Methods in French Banks, circa 1930-1950” provides an excellent survey of the mechanizing bank that reviews the introduction of tabulators, accounting machines and electromechanical data processors in the context of financial organizations that adopted and continually adapted technical capabilities that reflected existing procedures and dynamic managerial strategies. Here new technology is always the handmaiden of “streamlining” that saw recurrent rounds of re-organization and standardization of information within banks. Bonin also identifies the extensive and eager exchange of information about mechanization among French banks, a tendency shared by contemporary English banks. However, and English contemporaries might have been surprised by its omission here, in France the formation of the Comité Permanent d’Organisation Bancaire in 1930 saw a collective agency created to encourage increased efficiency through information exchange concerning mechanization, more effective work organization, improved clearing arrangements and better statistics and costing data. Paul Thomes responds affirmatively to the question “Is There an ICT Path in the German Savings Banking Industry? (c. 1900-1970s)” by evidencing the recurrent pioneering role of savings banks relative to technology adoption both by Germany’s large commercial banks and by the co-operative banks that served both SMEs and agricultural enterprises. Some additional interesting questions are prompted by this chapter: how was it that German banks were able to introduce machine bookkeeping during the First World War and increase mechanization in World War Two when in Britain such resources were very deliberately directed by the state to military purposes? Why was it that the Banque d’Alsace-Lorraine so quickly came to correspond to the French pattern described by Bonin rather than the “IT path” that Thomes presents for German banks? 
What is more certain is that a contemporary bank manager would have recognized a near-100 percent inflation of the claimed productivity benefits had a hardware salesman suggested to him that “a machine-booking clerk could manage 500 entries a day against 180 done by hand: an efficiency gain of nearly 300 percent” (p. 124); the true gain is nearer 180 percent. Unfortunately, not only is this calculation wrong but it is one of the few examples provided in this collection of an explicit assessment of the gains, expected or realized, attributable to the introduction of new technology.
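The arithmetic at issue is quickly checked; a minimal sketch using the figures quoted from p. 124:

```python
# Check of the productivity figures quoted above (p. 124): a machine-booking
# clerk posting 500 entries a day against 180 done by hand.
machine_entries = 500
hand_entries = 180

# Output ratio: the machine clerk does nearly three times the work...
ratio = machine_entries / hand_entries

# ...but the efficiency *gain* is measured against the hand-kept baseline,
# so it is under 180 percent, not "nearly 300 percent".
gain_percent = (machine_entries - hand_entries) / hand_entries * 100

print(f"ratio: {ratio:.2f}x")         # ratio: 2.78x
print(f"gain:  {gain_percent:.0f}%")  # gain:  178%
```

Conflating the output ratio (nearly 3x) with the percentage gain (about 178 percent) inflates the figure by roughly 100 percentage points, which is presumably the near-100 percent overstatement the review has in mind.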

Although not the major subject of this collection, technological developments are prominent in some chapters; these include Juan Pablo Pardo-Guerra’s micro history of the successive stages of automation that delivered digitalization at the London Stock Exchange and David Stearns’s account of the iterative responses implemented by Visa to meet technical and organizational challenges of implementing a global consumer payment card system. Mexico provides the setting for Gustavo Del Ángel-Mobarak’s study of the evolution of interbank connectivity through the use of ICT by its banks after 1965 which, despite an interlude of public ownership, once more illustrates a number of general themes. A novel and distinctive development here demonstrates the long term and continuing nature of technical change in banking; as early as 1934 wireless radio transmitters were used to transfer information between corporate headquarters and branches within Mexico City and this system was nation-wide within a decade.

Overall, this innovative anthology serves as a reminder that two polar positions can be discerned in studies that assess the impact of new technology. On the one hand, the adoption of novel devices can be captured by a “Gee whiz” response that emphasizes a dramatic break with past practices. Its polar opposite, by contrast, emphasizes long-run continuities and incremental developments. In this examination of technical innovation in the banking sector we find elements of both, though generally the evidence presented affirms strongly the “slowly but surely” approach that one would expect from bankers, who are often regarded as natural conservatives until the public is sharply reminded that risk-taking is a day-to-day activity for financiers. However, it might also be noted that in this text at least two essential factors do not get the attention they probably deserve. One is cost-benefit analysis of the adoption of new technology, both in terms of the net savings aspired to and the reduction in costs actually achieved. The other is gender: the employment of female staff is a basic, if not universal, feature of technological change in the financial sector, and it receives less attention here than is warranted.

Peter Wardley was editor of the annual review of IT for the Economic History Review (1990-95) and has written several articles and chapters on economic and business history. Among those relating to banking history are “The Commercial Banking Industry and Its Part in the Emergence and Consolidation of the Corporate Economy in Britain before 1940,” Journal of Industrial History, 3 (2000): 71-97, and “Women, Mechanization and Cost-savings in Twentieth-century British Banks and Other Financial Institutions,” in Mike Richardson and Peter Nicholls, eds., A Business and Labour History of Britain: Case Studies of Britain in the Nineteenth and Twentieth Centuries (2011).

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (November 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):Europe
North America
Time Period(s):20th Century: Pre WWII
20th Century: WWII and post-WWII


An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars, with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and a still spasmodic GDP growth.

GDP growth shows a pattern featured by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms of trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force to set the cycle up, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period       Per capita GDP fall (%)   Length of recession (years)   Time to pre-crisis levels (years)   Time to next crisis (years)
1872-1875    26                        3                             15                                  16
1888-1890    21                        2                             19                                  25
1912-1915    30                        3                             15                                  19
1930-1933    36                        3                             17                                  24-27
1954/57-59   9                         2-5                           18-21                               27-24
1981-1984    17                        3                             11                                  17
1998-2003    21                        5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.
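The relationship used in this paragraph, where the purchasing power of exports combines export volumes with the terms of trade, is a multiplicative identity; a minimal sketch with illustrative index numbers (placeholders, not the historical Uruguayan series):

```python
# Income terms of trade (purchasing power of exports) as the product of an
# export volume index and a terms-of-trade index (base year = 100).
# All figures below are illustrative placeholders, not actual data.

def purchasing_power_of_exports(volume_index: float, tot_index: float) -> float:
    """Index of what export earnings can buy in imports (base year = 100)."""
    return volume_index * tot_index / 100.0

# Volumes double and the terms of trade double: purchasing power quadruples,
# the kind of fourfold 1870-1913 growth the text describes.
print(purchasing_power_of_exports(200.0, 200.0))  # 400.0

# Stagnant volumes (as in 1930-1960): purchasing power simply tracks the
# terms of trade.
print(purchasing_power_of_exports(100.0, 80.0))   # 80.0
```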

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, and was mainly domestic-market orientated. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focusing on commerce, transport and traditional state bureaucracy during the first globalization boom; focusing on health care, education and social services, during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and focusing on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. The trend later favored industrialization policies, democratization, introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (nearly followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, and others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weak during the first half of the twentieth century, deepened significantly during the 1960s, as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear — as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, which reverted afterwards.

The gap in life-expectancy at birth has always been much smaller than the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

Years: 1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay: 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina: 63 34 38 31 32 29 25 25 24 21 15 16
Brazil: 23 8 8 8 8 8 7 9 9 13 11 10
Latin America: 13 12 13 10 9 9 9 6 6
USA: 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay: 57 65 72 79 85 91 92 94 95 97 99
Argentina: 57 65 72 79 85 91 93 94 94 96 98
Brazil: 39 38 37 42 46 51 61 69 76 81 86
Latin America: 28 30 34 37 42 47 56 65 71 77 83
USA: 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay: 23 31 31 30 34 42 52 46 43
Argentina: 28 41 42 36 39 43 55 44 45
Brazil: 12 11 12 14 18 22 30 42
Latin America: —
USA: 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay: 102 100 91 85 91 97 97 97 95 96 96
Argentina: 81 85 86 90 88 90 93 94 95 96 95
Brazil: 60 60 56 58 58 63 79 83 85 88 88
Latin America: 65 63 58 58 59 63 71 77 81 88 87
USA: 100 100 100 100 100 100 100 100 100 100 100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).
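The “US = 100” entries in Table 2 are ordinary relative indices: each country’s level divided by the US level in the same year, times 100. A minimal sketch (the levels below are invented for illustration, not the underlying Maddison or Astorga, Bergés and FitzGerald data):

```python
# Building a "US = 100" comparative index: express each country's level as a
# percentage of the US level in the same year. The per capita GDP levels
# below are hypothetical, for illustration only.

def us_100_index(country_level: float, us_level: float) -> int:
    return round(country_level / us_level * 100)

hypothetical_gdp_pc_1900 = {"Uruguay": 1080.0, "Argentina": 1360.0, "USA": 4000.0}

us = hypothetical_gdp_pc_1900["USA"]
for country, level in hypothetical_gdp_pc_1900.items():
    print(country, us_100_index(level, us))
```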

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Guerra Grande reconstruction after 1851, Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes, including the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, land properties were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908 Montevideo contained 40 percent of a national population that had risen to more than a million inhabitants, and provided the main part of Uruguay’s services, civil servants and the weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector showed a very poor performance, due to lack of innovation away from natural pastures. In the 1930s, its performance deteriorated mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand: the government raised revenue by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for capital goods, raw material and fuel imports, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.

However, rent-seeking industries searching for protection and a weak clientelist state, crowded by civil servants recruited in exchange for political favors to the political parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country peopled by only about two million inhabitants were exacerbated in the late 1950s as terms of trade deteriorated. The clientelist political system, which was created by both traditional parties while the state was expanding at the national and local level, was now unable to absorb the increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports, as the engine of growth, was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-orientated towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-orientated to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had continued above 50% since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and obviously collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource intensive exports to the region and other emergent markets, with a modest intra-industrial trade mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows which fueled a rather volatile growth period. However, by the year 2000, Uruguay had a much worse position in relation to the leaders of the world economy as measured by per capita GDP, real wages, equity and education coverage, than it had fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by and highly dependent on foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crisis of the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, thus making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. Over that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review 64 (1984).

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken around 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American carriers were poorly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889-1901

(Fatality Rates per Thousand Workers per Year)

                                       1889   1895   1901
British railroad workers, all causes   1.14   0.95   0.89
British trainmen (a), all causes       4.26   3.22   2.21
    Coupling                           0.94   0.83   0.74
American railroad workers, all causes  2.67   2.31   2.50
American trainmen, all causes          8.52   6.45   7.35
    Coupling                           1.73c  1.20   0.78
    Braking (b)                        3.25c  2.44   2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also dated from this period, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly-formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response, Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific rather than a regulatory body; it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and reported being impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont, and in whole industries such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but also reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15
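The “about 38 percent” figure can be checked directly against the manufacturing rates reported in Table 4; a minimal sketch in Python (the two rate values are taken from the table):

```python
# Check of the cited decline in manufacturing injury rates, 1926-1939,
# using the Table 4 figures (injuries per million manhours).
rate_1926 = 24.2
rate_1939 = 14.9

decline = (rate_1926 - rate_1939) / rate_1926
print(f"{decline:.1%}")  # prints "38.4%"
```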

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussion of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Judy Daley. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers results in [10/(450×2000)]×1,000,000 = 11.1 injuries per million hours worked.
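The rate conventions described in this footnote amount to two one-line formulas; a minimal sketch in Python (the function names are illustrative, and the figures are the footnote’s own example of 10 injuries among 450 workers):

```python
# Injury rates as described in the footnote: per thousand workers per
# year, and per million hours worked (assuming a 2,000-hour work year).

def rate_per_thousand_workers(injuries, workers):
    """Annual injuries per 1,000 workers."""
    return injuries / workers * 1_000

def rate_per_million_hours(injuries, workers, hours_per_year=2_000):
    """Injuries per million manhours."""
    return injuries / (workers * hours_per_year) * 1_000_000

print(round(rate_per_thousand_workers(10, 450), 1))  # prints 22.2
print(round(rate_per_million_hours(10, 450), 1))     # prints 11.1
```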

2 For statistics on work injuries from 1922-1970, see U.S. Department of Commerce, Historical Statistics, Series D-1029 to D-1036. Earlier data are in Aldrich, Safety First, Appendices 1-3.

3 Hounshell, American System; Rosenberg, Technology; Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car, Usselman “Air Brakes for Freight trains,” and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety; Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop-floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal power, there was little need for mineral fuel in seventeenth- and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade, at that time centered in the Richmond coal basin of eastern Virginia, would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make the improvements necessary to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered the urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields and seaboard markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:

Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;

Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many states beyond Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850; only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by lesser-skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous as it would become in the era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power, even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners and struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and the fuel therefore appeared inappropriate for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and in 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
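The “real (i.e., inflation-adjusted) price” here is simply the nominal price deflated by a price index; a minimal sketch in Python (the price and index values below are purely illustrative, not data from the article):

```python
def real_price(nominal_price, price_index, base_index=100.0):
    """Deflate a nominal (current-dollar) price into base-year dollars."""
    return nominal_price * base_index / price_index

# Illustrative only: a $7.00 nominal price, with the general price level
# 75 percent above its base year, deflates to $4.00 in base-year terms.
print(real_price(7.00, 175.0))  # prints 4.0
```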

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and required only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, though some blasted holes in the seams with black powder and then loaded the broken coal onto wagons by hand. As miners sought to remove more coal, however, shafts were dug deeper below the water line, and mining came to require much larger amounts of capital, since new systems of pumping, ventilation, and extraction depended on steam power in the mines. By the 1890s, electric cutting machines replaced blasting as the means of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these machines. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when a wave of strikes pushed many workers into union membership. By 1903, the UMWA counted about a quarter of a million members, had raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coalfields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company symbolized a new coal industry in which hard-line positions developed in both labor’s and capital’s camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

 

Table 1: Coal Production in the United States, 1829-1899

Year   Anthracite (thousands of tons)   Bituminous (thousands of tons)   Percent Increase over Decade   Tons per Capita
1829   138       102       n/a   0.02
1839   1,008     552       550   0.09
1849   3,995     2,453     313   0.28
1859   9,620     6,013     142   0.50
1869   17,083    15,821    110   0.85
1879   30,208    37,898    107   1.36
1889   45,547    95,683    107   2.24
1899   60,418    193,323   80    3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
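As a check on the table’s internal consistency, the “Percent Increase over Decade” column can be recomputed from the two production columns; the following is a minimal sketch in Python, with the figures simply copied from Table 1 (in thousands of tons). The column turns out to track total anthracite plus bituminous output.

```python
# Verify Table 1's "Percent Increase over Decade" column: each figure is the
# decade-over-decade growth of total (anthracite + bituminous) production,
# rounded to the nearest whole percent.
production = {  # year: (anthracite, bituminous), thousands of tons
    1829: (138, 102),
    1839: (1_008, 552),
    1849: (3_995, 2_453),
    1859: (9_620, 6_013),
    1869: (17_083, 15_821),
    1879: (30_208, 37_898),
    1889: (45_547, 95_683),
    1899: (60_418, 193_323),
}
years = sorted(production)
totals = [sum(production[y]) for y in years]
increases = {
    years[i]: round(100 * (totals[i] - totals[i - 1]) / totals[i - 1])
    for i in range(1, len(years))
}
print(increases)  # {1839: 550, 1849: 313, 1859: 142, 1869: 110, 1879: 107, 1889: 107, 1899: 80}
```

The recomputed figures match the published column exactly.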

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and a still spasmodic GDP growth.
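To give that one-percent figure some scale, a back-of-the-envelope compounding calculation (assuming exactly 1% per year over the full 1870-2002 span, which is only an approximation of the average rate cited above) shows the implied cumulative rise in per capita GDP:

```python
# Compound an assumed 1% average annual growth rate over 1870-2002.
years = 2002 - 1870          # 132 years
multiple = 1.01 ** years     # cumulative growth factor
print(round(multiple, 2))    # roughly 3.7, i.e. an almost fourfold rise
```

In other words, even a modest one percent per year, sustained over 132 years, nearly quadruples per capita income.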

GDP growth shows a pattern featured by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms of trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force to set the cycle up, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period   Per capita GDP fall (%)   Length of recession (years)   Time to pre-crisis levels (years)   Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, and was mainly domestic-market orientated. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focusing on commerce, transport and traditional state bureaucracy during the first globalization boom; focusing on health care, education and social services, during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and focusing on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. The trend later favored industrialization policies, democratization, introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule didn’t increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (nearly followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate relative to the successful core countries during the late 1800s, as shown in Figure 2. This trend of relative decline was somewhat weak during the first half of the twentieth century, deepened during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, then in the late 1800s both Argentina and Uruguay had a great handicap relative to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reversed afterwards.

The gap in life-expectancy at birth has always been much smaller than the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina 63 34 38 31 32 29 25 25 24 21 15 16
Brazil 23 8 8 8 8 8 7 9 9 13 11 10
Latin America 13 12 13 10 9 9 9 6 6
USA 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay 57 65 72 79 85 91 92 94 95 97 99
Argentina 57 65 72 79 85 91 93 94 94 96 98
Brazil 39 38 37 42 46 51 61 69 76 81 86
Latin America 28 30 34 37 42 47 56 65 71 77 83
USA 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay 23 31 31 30 34 42 52 46 43
Argentina 28 41 42 36 39 43 55 44 45
Brazil 12 11 12 14 18 22 30 42
Latin America
USA 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay 102 100 91 85 91 97 97 97 95 96 96
Argentina 81 85 86 90 88 90 93 94 95 96 95
Brazil 60 60 56 58 58 63 79 83 85 88 88
Latin America 65 63 58 58 59 63 71 77 81 88 87
USA 100 100 100 100 100 100 100 100 100 100 100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Great-War reconstruction after 1851, Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes including: the steam ship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; a significant reduction in transaction costs, related to a fluctuating but noticeable process of institutional building and strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, land properties were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and provided most of Uruguay’s services and civil servants, as well as its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector showed a very poor performance, due to lack of innovation away from natural pastures. In the 1930s, its performance deteriorated mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand. Because the government raised revenue by manipulating exchange rates, rising export prices gave the state a greater capacity to protect the manufacturing sector through low exchange rates for capital goods, raw materials and fuel imports, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.

However, rent-seeking industries searching for protection and a weak clientelist state, crowded with civil servants recruited in exchange for political favors to the parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which both traditional parties had built while the state expanded at the national and local level, was now unable to absorb the increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports, as the engine of growth, was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-orientated towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-orientated to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had continued above 50% since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and obviously collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource intensive exports to the region and other emergent markets, with a modest intra-industrial trade mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows which fueled a rather volatile growth period. However, by the year 2000, Uruguay had a much worse position in relation to the leaders of the world economy as measured by per capita GDP, real wages, equity and education coverage, than it had fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by and highly dependent on foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. Over that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review 64, no. 4 (1984).

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica by Luis Bertola. Montevideo: Uruguay en la región y el mundo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Citation: Bertola, Luis. “An Overview of the Economic History of Uruguay since the 1870s”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/article/Bertola.Uruguay.final

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

No single variable serves as a perfect measure of urban decline, but this article takes an in-depth look at the phenomenon by focusing on the best available measure of a city’s well-being: its population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 130 million people, from about 151 million to 281 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

| City | 1950 | 1960 | 1970 | 1980 | 1990 | 2000 | % Change 1950-2000 |
|---------------|-----------|-----------|-----------|-----------|-----------|-----------|--------|
| New York | 7,891,957 | 7,781,984 | 7,895,563 | 7,071,639 | 7,322,564 | 8,008,278 | 1.5 |
| Philadelphia | 2,071,605 | 2,002,512 | 1,949,996 | 1,688,210 | 1,585,577 | 1,517,550 | -26.7 |
| Boston | 801,444 | 697,177 | 641,071 | 562,994 | 574,283 | 589,141 | -26.5 |
| Chicago | 3,620,962 | 3,550,404 | 3,369,357 | 3,005,072 | 2,783,726 | 2,896,016 | -20.0 |
| Detroit | 1,849,568 | 1,670,144 | 1,514,063 | 1,203,339 | 1,027,974 | 951,270 | -48.6 |
| Cleveland | 914,808 | 876,050 | 750,879 | 573,822 | 505,616 | 478,403 | -47.7 |
| Kansas City | 456,622 | 475,539 | 507,330 | 448,159 | 435,146 | 441,545 | -3.3 |
| Denver | 415,786 | 493,887 | 514,678 | 492,365 | 467,610 | 554,636 | 33.4 |
| Omaha | 251,117 | 301,598 | 346,929 | 314,255 | 335,795 | 390,007 | 55.3 |
| Los Angeles | 1,970,358 | 2,479,015 | 2,811,801 | 2,966,850 | 3,485,398 | 3,694,820 | 87.5 |
| San Francisco | 775,357 | 740,316 | 715,674 | 678,974 | 723,959 | 776,733 | 0.2 |
| Seattle | 467,591 | 557,087 | 530,831 | 493,846 | 516,259 | 563,374 | 20.5 |
| Houston | 596,163 | 938,219 | 1,233,535 | 1,595,138 | 1,630,553 | 1,953,631 | 227.7 |
| Dallas | 434,462 | 679,684 | 844,401 | 904,078 | 1,006,877 | 1,188,580 | 173.6 |
| Phoenix | 106,818 | 439,170 | 584,303 | 789,704 | 983,403 | 1,321,045 | 1136.7 |
| New Orleans | 570,445 | 627,525 | 593,471 | 557,515 | 496,938 | 484,674 | -15.0 |
| Atlanta | 331,314 | 487,455 | 495,039 | 425,022 | 394,017 | 416,474 | 25.7 |
| Nashville | 174,307 | 170,874 | 426,029 | 455,651 | 488,371 | 545,524 | 213.0 |
| Washington | 802,178 | 763,956 | 756,668 | 638,333 | 606,900 | 572,059 | -28.7 |
| Miami | 249,276 | 291,688 | 334,859 | 346,865 | 358,548 | 362,470 | 45.4 |
| Charlotte | 134,042 | 201,564 | 241,178 | 314,447 | 395,934 | 540,828 | 303.5 |

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered together by region, and the cities at the top of the table – cities from the Northeast and Midwest – either experience no significant population growth (New York City) or suffer dramatic population loss (Detroit and Cleveland). These cities’ experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experience triple digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.
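The percent-change column in Table 1 is simply the 1950-to-2000 difference divided by the 1950 population. A short, purely illustrative Python sketch verifies a few of the entries:

```python
# Verify the percent-change column of Table 1 from the decennial totals.
populations = {
    # city: (1950 population, 2000 population), from Table 1
    "New York":  (7_891_957, 8_008_278),
    "Detroit":   (1_849_568, 951_270),
    "Phoenix":   (106_818, 1_321_045),
    "Charlotte": (134_042, 540_828),
}

for city, (p1950, p2000) in populations.items():
    pct_change = (p2000 - p1950) / p1950 * 100
    print(f"{city}: {pct_change:.1f}%")
```

The computed values (1.5, -48.6, 1136.7, and 303.5 percent) match the table.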

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

| Metropolitan Area | 1950 | 1960 | 1970 | 2000 | % Change 1950-2000 |
|---|---|---|---|---|---|
| New York-Newark-Jersey City, NY | 13,047,870 | 14,700,000 | 15,812,314 | 16,470,048 | 26.2 |
| Philadelphia, PA | 3,658,905 | 4,175,988 | 4,525,928 | 4,580,167 | 25.2 |
| Boston, MA | 3,065,344 | 3,357,607 | 3,708,710 | 4,001,752 | 30.5 |
| Chicago-Gary, IL-IN | 5,612,248 | 6,805,362 | 7,606,101 | 8,573,111 | 52.8 |
| Detroit, MI | 3,150,803 | 3,934,800 | 4,434,034 | 4,366,362 | 38.6 |
| Cleveland, OH | 1,640,319 | 2,061,668 | 2,238,320 | 1,997,048 | 21.7 |
| Kansas City, MO-KS | 972,458 | 1,232,336 | 1,414,503 | 1,843,064 | 89.5 |
| Denver, CO | 619,774 | 937,677 | 1,242,027 | 2,414,649 | 289.6 |
| Omaha, NE | 471,079 | 568,188 | 651,174 | 803,201 | 70.5 |
| Los Angeles-Long Beach, CA | 4,367,911 | 6,742,696 | 8,452,461 | 12,365,627 | 183.1 |
| San Francisco-Oakland, CA | 2,531,314 | 3,425,674 | 4,344,174 | 6,200,867 | 145.0 |
| Seattle, WA | 920,296 | 1,191,389 | 1,523,601 | 2,575,027 | 179.8 |
| Houston, TX | 1,021,876 | 1,527,092 | 2,121,829 | 4,540,723 | 344.4 |
| Dallas, TX | 780,827 | 1,119,410 | 1,555,950 | 3,369,303 | 331.5 |
| Phoenix, AZ | NA | 663,510 | 967,522 | 3,251,876 | 390.1* |
| New Orleans, LA | 754,856 | 969,326 | 1,124,397 | 1,316,510 | 74.4 |
| Atlanta, GA | 914,214 | 1,224,368 | 1,659,080 | 3,879,784 | 324.4 |
| Nashville, TN | 507,128 | 601,779 | 704,299 | 1,238,570 | 144.2 |
| Washington, DC | 1,543,363 | 2,125,008 | 2,929,483 | 4,257,221 | 175.8 |
| Miami, FL | 579,017 | 1,268,993 | 1,887,892 | 3,876,380 | 569.5 |
| Charlotte, NC | 751,271 | 876,022 | 1,028,505 | 1,775,472 | 136.3 |

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.

Table 3: Land Area for Selected U.S. Cities, 1950-2000 (square miles)

| City | 1950 | 1960 | 1970 | 2000 | % Change 1950-2000 |
|---|---|---|---|---|---|
| New York, NY | 315.1 | 300 | 299.7 | 303.3 | -3.74 |
| Philadelphia, PA | 127.2 | 129 | 128.5 | 135.1 | 6.21 |
| Boston, MA | 47.8 | 46 | 46 | 48.4 | 1.26 |
| Chicago, IL | 207.5 | 222 | 222.6 | 227.1 | 9.45 |
| Detroit, MI | 139.6 | 138 | 138 | 138.8 | -0.57 |
| Cleveland, OH | 75 | 76 | 75.9 | 77.6 | 3.47 |
| Kansas City, MO | 80.6 | 130 | 316.3 | 313.5 | 288.96 |
| Denver, CO | 66.8 | 68 | 95.2 | 153.4 | 129.64 |
| Omaha, NE | 40.7 | 48 | 76.6 | 115.7 | 184.28 |
| Los Angeles, CA | 450.9 | 455 | 463.7 | 469.1 | 4.04 |
| San Francisco, CA | 44.6 | 45 | 45.4 | 46.7 | 4.71 |
| Seattle, WA | 70.8 | 82 | 83.6 | 83.9 | 18.50 |
| Houston, TX | 160 | 321 | 433.9 | 579.4 | 262.13 |
| Dallas, TX | 112 | 254 | 265.6 | 342.5 | 205.80 |
| Phoenix, AZ | 17.1 | 187 | 247.9 | 474.9 | 2677.19 |
| New Orleans, LA | 199.4 | 205 | 197.1 | 180.6 | -9.43 |
| Atlanta, GA | 36.9 | 136 | 131.5 | 131.7 | 256.91 |
| Nashville, TN | 22 | 29 | 507.8 | 473.3 | 2051.36 |
| Washington, DC | 61.4 | 61 | 61.4 | 61.4 | 0.00 |
| Miami, FL | 34.2 | 34 | 34.3 | 35.7 | 4.39 |
| Charlotte, NC | 30 | 64.8 | 76 | 242.3 | 707.67 |

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.
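The annexation question can be examined directly by combining Tables 1 and 3: dividing population by land area gives density, and a city that grew mainly by annexation will show a soaring population alongside flat or falling density. A short illustrative sketch for three of the cities above:

```python
# Population (Table 1) and land area in square miles (Table 3), 1950 and 2000.
cities = {
    # city: (pop 1950, pop 2000, area 1950, area 2000)
    "Phoenix":   (106_818, 1_321_045, 17.1, 474.9),
    "Nashville": (174_307,   545_524, 22.0, 473.3),
    "Boston":    (801_444,   589_141, 47.8,  48.4),
}

for city, (p50, p00, a50, a00) in cities.items():
    d50, d00 = p50 / a50, p00 / a00
    print(f"{city}: {d50:,.0f} -> {d00:,.0f} persons per square mile")
```

Phoenix's population grew more than twelvefold while its density fell by more than half, a sign that boundary expansion rather than crowding drove the headline growth; Boston, by contrast, lost population within an essentially fixed footprint.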

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950-2000

| | 1950 | 1960 | 1970 | 1980 | 1990 | 2000 |
|---|---|---|---|---|---|---|
| Population density (persons per square mile) | 50.9 | 50.7 | 57.4 | 64 | 70.3 | 79.6 |
| Population by region: | | | | | | |
| West | 19,561,525 | 28,053,104 | 34,804,193 | 43,172,490 | 52,786,082 | 63,197,932 |
| South | 47,197,088 | 54,973,113 | 62,795,367 | 75,372,362 | 85,445,930 | 100,236,820 |
| Midwest | 44,460,762 | 51,619,139 | 56,571,663 | 58,865,670 | 59,668,632 | 64,392,776 |
| Northeast | 39,477,986 | 44,677,819 | 49,040,703 | 49,135,283 | 50,809,229 | 53,594,378 |
| Population by region (% of total): | | | | | | |
| West | 13 | 15.6 | 17.1 | 19.1 | 21.2 | 22.5 |
| South | 31.3 | 30.7 | 30.9 | 33.3 | 34.4 | 35.6 |
| Midwest | 29.5 | 28.8 | 27.8 | 26 | 24 | 22.9 |
| Northeast | 26.2 | 24.9 | 24.1 | 21.7 | 20.4 | 19 |
| Population living in non-metropolitan areas (millions) | 66.2 | 65.9 | 63 | 57.1 | 56 | 55.4 |
| Population living in metropolitan areas (millions) | 84.5 | 113.5 | 140.2 | 169.4 | 192.7 | 226 |
| Percent in suburbs in metropolitan area | 23.3 | 30.9 | 37.6 | 44.8 | 46.2 | 50 |
| Percent in central city in metropolitan area | 32.8 | 32.3 | 31.4 | 30 | 31.3 | 30.3 |
| Percent living in the ten largest cities | 14.4 | 12.1 | 10.8 | 9.2 | 8.8 | 8.5 |
| Percentage minority by region: | | | | | | |
| West | | | | 26.5 | 33.3 | 41.6 |
| South | | | | 25.7 | 28.2 | 34.2 |
| Midwest | | | | 12.5 | 14.2 | 18.6 |
| Northeast | | | | 16.6 | 20.6 | 26.6 |
| Housing units by region: | | | | | | |
| West | 6,532,785 | 9,557,505 | 12,031,802 | 17,082,919 | 20,895,221 | 24,378,020 |
| South | 13,653,785 | 17,172,688 | 21,031,346 | 29,419,692 | 36,065,102 | 42,382,546 |
| Midwest | 13,745,646 | 16,797,804 | 18,973,217 | 22,822,059 | 24,492,718 | 26,963,635 |
| Northeast | 12,051,182 | 14,798,360 | 16,642,665 | 19,086,593 | 20,810,637 | 22,180,440 |

Note: Minority-share figures are available only for 1980-2000.

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000: no region has a minority population share greater than 26.5 percent in 1980, yet by 2000 only the Midwest remains below that level. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the growing number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by the figures for average population density for the United States as a whole, the numbers of Americans living in metropolitan versus non-metropolitan areas, and the percentage of Americans living in the ten largest cities in the United States.
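The suburban shares in Table 4 appear to be percentages of the total U.S. population (the central-city and suburban shares sum to the metropolitan share), so the absolute number of suburban residents can be recovered by multiplying each share by the sum of the metropolitan and non-metropolitan totals. A rough, illustrative check:

```python
# Table 4 figures: population in millions; suburban share in percent of the
# total U.S. population (an interpretation inferred from the table itself).
nonmetro     = {1950: 66.2, 2000: 55.4}
metro        = {1950: 84.5, 2000: 226.0}
suburb_share = {1950: 23.3, 2000: 50.0}

for year in (1950, 2000):
    total = metro[year] + nonmetro[year]
    suburban = total * suburb_share[year] / 100
    print(f"{year}: roughly {suburban:.0f} million suburban residents")
```

On these figures the suburban population roughly quadrupled, from about 35 million in 1950 to about 141 million in 2000, while the central-city share of the population barely moved.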

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited the cities of Detroit and Boston would be able to tell you that the urban decline in these cities has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city’s public spaces would not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the loss of population experienced by Detroit and Boston does not tell the full story about how urban decline has affected these cities. Both have lost population, yet Detroit has lost a great deal more – it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers will begin to leave the city. Yet, when population in a city begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a stock of housing that interacts with a reduction in housing demand, producing a rapid reduction in the real price of housing. Empirical evidence supports the assertions made by the model, for in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
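The asymmetry the model emphasizes can be illustrated with a toy simulation. This is a sketch of the mechanism only, not Glaeser and Gyourko's actual model, and every parameter is arbitrary: builders add units whenever the price reaches construction cost, but the stock shrinks only through slow depreciation, so a collapse in demand shows up as falling prices rather than a smaller housing stock.

```python
# Toy illustration of durable housing: supply expands quickly when building is
# profitable but contracts only through slow depreciation. All parameters are
# hypothetical and purely illustrative.

CONSTRUCTION_COST = 100.0   # price at which builders will add units
DEPRECIATION = 0.01         # share of the housing stock lost each period

def simulate(demand_path, stock=1.0):
    """Return (price, stock) each period under inverse demand p = D / stock."""
    history = []
    for demand in demand_path:
        price = demand / stock
        if price > CONSTRUCTION_COST:
            # Builders add units until the price falls back to cost.
            stock = demand / CONSTRUCTION_COST
            price = CONSTRUCTION_COST
        stock *= 1 - DEPRECIATION      # housing decays, but only slowly
        history.append((price, stock))
    return history

# Demand booms, then collapses by half: the stock barely shrinks,
# so the adjustment falls almost entirely on price.
for t, (price, stock) in enumerate(simulate([200.0] * 5 + [100.0] * 5)):
    print(f"t={t}: price={price:6.1f}  stock={stock:4.2f}")
```

In the boom periods the price is pinned at construction cost while the stock expands; after the demand shock the price roughly halves while the stock declines by only about one percent per period, which is the pattern the model predicts for cities like Cleveland or Buffalo.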

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value for the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner occupied housing declined in cities that experienced a race riot in the 1960s, and, in particular, it focuses on the gap in property values that developed between white and black owned homes. Nonetheless, a great deal of work still remains to be done before the magnitude of urban decay in the United States is fully understood.
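Using the rounded figures quoted above, Smith's numbers imply that downtown Cleveland lost over nine-tenths of its real property value in fifty years:

```python
# Smith (2003): aggregate assessed value of property in downtown Cleveland,
# both figures expressed in 1980 dollars (rounded as quoted in the text).
peak_1930 = 600e6
trough_1980 = 45e6

decline_pct = (peak_1930 - trough_1980) / peak_1930 * 100
print(f"Real decline, 1930-1980: {decline_pct:.1f} percent")  # 92.5 percent
```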

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their location carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important decisions about location, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, the firm owners must decide where the business should be located within the chosen city. In each case, transportation costs and input costs must dominate the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced out by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and the output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the costs of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: Land close to the city’s transportation hub was in high demand, and, therefore, relatively expensive. 
It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to do so because the costs of moving inputs and outputs to and from the transportation hub would have outweighed the savings from buying cheaper land on the periphery of the city. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.
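This trade-off can be made concrete with a stylized cost comparison. The land prices, distances, shipping volume, and freight rates below are entirely hypothetical, chosen only to show how a fall in intra-city freight costs flips the optimal location:

```python
# Stylized location choice: total cost = land price + freight haulage to the
# transport hub. All numbers are hypothetical.

def total_cost(land_price, miles_to_hub, tons_shipped, rate_per_ton_mile):
    """Cost of a site: land plus the cost of hauling output to the hub."""
    return land_price + miles_to_hub * tons_shipped * rate_per_ton_mile

# site: (land price in dollars, distance to the rail/dock hub in miles)
sites = {"central": (500_000, 1), "periphery": (100_000, 10)}
TONS_SHIPPED = 200_000

for era, rate in [("wagon era", 0.40), ("truck era", 0.02)]:  # $ per ton-mile
    costs = {name: total_cost(land, miles, TONS_SHIPPED, rate)
             for name, (land, miles) in sites.items()}
    best = min(costs, key=costs.get)
    print(f"{era}: cheapest site is {best} (${costs[best]:,.0f})")
```

With expensive wagon haulage the central site wins despite its costly land; once trucking makes freight cheap, the peripheral site dominates, which is exactly the shift in plant location the paragraph describes.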

Yet, transportation costs and input prices have not simply varied across space; they’ve also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should choose to locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail cost of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measuring the prices in constant 1925 dollars (these values would be roughly $21,260 and $5400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

| Year | Millions of Registered Vehicles |
|------|------|
| 1910 | 0.5 |
| 1920 | 8.1 |
| 1930 | 23.0 |
| 1940 | 27.5 |
| 1950 | 40.4 |
| 1960 | 61.7 |
| 1970 | 89.2 |
| 1980 | 131.6 |

Source: Muller, p. 36.
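The freight-rate figures quoted above (18.5 cents per ton-mile in 1890 falling to 2.3 cents by 2003, both in 2001 dollars) correspond to a compound decline of roughly 1.8 percent per year, as a quick illustrative calculation shows:

```python
# Real cost of moving one ton of goods one mile, in 2001 cents,
# per the Glaeser and Kahn figures quoted in the text.
cost_1890, cost_2003 = 18.5, 2.3
years = 2003 - 1890

annual_change = (cost_2003 / cost_1890) ** (1 / years) - 1
print(f"Average annual change: {annual_change * 100:.2f} percent per year")
```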

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be the best for the firm or resident. Americans began demanding more and improved roads to capitalize on the mobility made possible by the car. Also, the automotive, construction, and tourism related industries lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously relegated to local governments. The landmark National Interstate and Defense Highways Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (However, it is important to note that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America, where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73 percent of blacks lived in urban areas, and the majority of urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at locations in the suburbs, and the result for many was a “spatial mismatch” – they lived in the inner city where employment opportunities were scarce, yet lacked access to transportation that would allow them to commute to the suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner city public housing projects, reinforced the spatial mismatch problem. As inner city African Americans coped with high unemployment rates, high crime rates and urban disturbances such as the race riots of the 1960s became obvious symptoms of economic distress. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities.
White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American – a stark contrast from 1950 when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find their arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, changes in information technology mean it is no longer a requirement that we locate ourselves in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity in their own jobs. Another example of economies of agglomeration in Silicon Valley is the labor pooling that occurs there. Because workers who are trained in computer related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another of the benefits that some individuals will associate with urban living is the diversity of products and experiences that are available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… literally almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian food restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian food restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences. Over time, the average amount of Persian food demanded by each inhabitant of the city may increase.
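The demand-density argument boils down to a threshold condition: a niche restaurant is viable only where population times per-capita demand covers its fixed costs. In the sketch below the cost and demand parameters are invented for illustration; only the Chicago population (the 2000 figure from Table 1) comes from the article:

```python
# Viability threshold for a niche restaurant. Cost and demand parameters are
# hypothetical; the Chicago population is the 2000 figure from Table 1.
FIXED_COST = 300_000        # annual fixed cost of operating, in dollars
MARGIN_PER_MEAL = 10        # contribution margin per meal served, in dollars
MEALS_PER_PERSON = 0.05     # thin demand: one meal per twenty residents a year

def is_viable(population):
    """True if thin per-capita demand, summed over everyone, covers costs."""
    return population * MEALS_PER_PERSON * MARGIN_PER_MEAL >= FIXED_COST

for place, pop in [("Chicago", 2_896_016), ("small town", 25_000)]:
    print(f"{place}: {'viable' if is_viable(pop) else 'not viable'}")
```

The same thin per-capita demand that sinks the restaurant in a town of 25,000 comfortably sustains it in a city of nearly three million, which is the point of the Persian-restaurant example.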

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits from locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding that there are net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities in the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive in the future.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3 (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, eds. The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/

Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance internationally and statistically, and an account of major trends in Swedish economic development during the nineteenth and twentieth centuries.[1]

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly, but at a somewhat slower rate than Sweden. Sweden clearly outpaced growth in the rest of industrial Europe and in the U.S. Growth in the entire world economy, as measured by Maddison, was even slower.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Year Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.
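The relation between the growth differential in Table 1 and the relative income levels described here is simple compound-growth arithmetic. A minimal sketch, taking the 2.4 and 1.5 percent rates from Table 1 and treating the horizon as a flat 100 years with equal starting levels (both simplifying assumptions):

```python
def growth_factor(rate_pct: float, years: int) -> float:
    """Cumulative growth factor implied by a constant annual percentage rate."""
    return (1 + rate_pct / 100) ** years

# Per capita growth rates from Table 1, 1871/1875-1971/1975.
sweden = growth_factor(2.4, 100)
world = growth_factor(1.5, 100)

# Starting from similar income levels, the post-1870 differential alone yields
# roughly a 2.4-fold gain relative to the world average over the century; the
# somewhat larger gap in Figure 1 also reflects growth differences before the 1870s.
print(round(sweden / world, 1))
```

The exercise shows how a seemingly small difference in annual rates (0.9 percentage points) compounds into a large difference in income levels over a century.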

Figure 1
Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004
(Nine year moving averages)
Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note: The annual variation in world production between Maddison’s benchmarks 1870, 1913 and 1950 is estimated from his supply of annual country series.

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, resource-rich Scandinavian countries such as Sweden and Finland had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development – directed to a growing domestic market but even more to a widening world market – became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,[2] quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions arose that required profound adaptation and a renewal of entrepreneurial activity, as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only relative to the growth rates of the golden post-war age, Swedish growth fell clearly below the historical long-run growth trend, and to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories in industrial society. At the same time new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. This structural change may have hit the Swedish economy harder than most other industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different compositions of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all aspects but population. There was still per capita growth, but to some extent this was a recovery from the low levels during the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all aspects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they remained so during the following periods as well. Up to the 1970s per capita growth rates increased for each successive period. In an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long-run relative success of Swedish economic growth. The decisive culmination in the post-war period, with high growth rates in investments and in foreign trade, stands out as well, as does the deceleration in all aspects in the late twentieth century.

An analysis in a traditional growth accounting framework gives a long-term pattern with certain periodic similarities (see Table 3). Total factor productivity growth increased over time up to the 1970s, only to decrease to its long-run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the “Swedish Model” to accommodate new growth forces or as another case of the “productivity paradox” accompanying the information technology revolution.[3]

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth Capital Labor TFP
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.

In terms of contribution to overall growth, TFP has increased its share in every period. The TFP share was low in the 1840s, but there was a very marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far from the 1970s, indicating an increasing role of human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Thus its contribution to growth during the post-war Golden Age was significant, with very high levels of material investments. At the same time TFP growth culminated with positive structural shifts, as well as increased knowledge intensity complementary to the investments. In quantitative terms, labor has progressively reduced its role in economic growth. One should observe, however, the relatively large importance of labor in Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, commented upon further below.
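The decomposition behind Table 3 is standard growth accounting: TFP growth is measured as the Solow residual, i.e. output growth minus the factor-share-weighted growth of capital and labor. A minimal sketch of that calculation, using purely illustrative numbers (the growth rates and the 0.35 capital share below are hypothetical, not the Krantz/Schön series):

```python
def solow_residual(g_y: float, g_k: float, g_l: float, alpha: float) -> float:
    """TFP growth as the Solow residual (all growth rates in percent per year).

    alpha is capital's share of income; labor's share is 1 - alpha.
    """
    return g_y - alpha * g_k - (1 - alpha) * g_l

# Hypothetical inputs for illustration only.
g_y, g_k, g_l, alpha = 3.0, 4.0, 1.0, 0.35
tfp = solow_residual(g_y, g_k, g_l, alpha)  # 3.0 - 0.35*4.0 - 0.65*1.0

# Relative contributions to GDP growth, as in Table 3's percentage columns.
contrib_capital = 100 * alpha * g_k / g_y
contrib_labor = 100 * (1 - alpha) * g_l / g_y
contrib_tfp = 100 * tfp / g_y  # the three shares sum to 100 by construction
```

With these numbers, capital contributes the largest share and the residual captures everything not explained by measured inputs, which is why rising TFP shares are read as a growing role for technology and knowledge.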

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, on the contrary, very stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century when industry and transportation boomed, while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although still not outpacing industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamental new prerequisites of growth.

Table 4 Growth Rates of Industrial Sectors, 1800-2000

Period Agriculture Industry and Handicraft Transport and Communications Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and the rural society dominated the Swedish economy. Thus, more than three-quarters of the population were occupied in agriculture while roughly 90 percent lived in the countryside. Many non-agrarian activities such as the iron industry, the saw mill industry and many crafts as well as domestic, religious and military services were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. The population increase, in combination with the temporary decrease in imports, stimulated enclosures and the reclamation of land, the introduction of new crops and new methods and, above all, a greater degree of market orientation. In the decades after the war the traditional Swedish trade deficit in grain even shifted to a trade surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade of agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector new means of information were introduced by, for example, agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long term capital for investments that in turn intensified the commercialization of production.

All these elements meant a profound institutional change in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, a greater interest in information and in the main instrument of information, namely literacy, was infused. Traditionally, popular literacy had been upheld by the church, mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably, foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investments increased. Furthermore, railways became the most prominent component of a new infrastructure, and with their construction a new element was introduced into Swedish growth: heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced by the Crimean War, led to a particularly strong expansion in Swedish exports, with sharp price increases for three staple goods – bar iron, wood and oats. Charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until the mid-nineteenth century. Bar iron met, however, increasingly strong competition from the British and continental iron and steel industries, and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The saw mill industry was an entirely new export industry that grew dramatically in the 1850s and 1860s. Up until this time, the vast forests in Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of saw mill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish merchants, as well as Norwegian, German, British and Dutch merchants, became saw mill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats appeared mainly in Britain, where short-distance transportation in rapidly growing urban centers increased the fleet of horses. Swedish oats became an important energy resource during the decades around the mid-nineteenth century. In Sweden this had a special significance, since oats could be cultivated on rather barren and marginal soils and Sweden was richly endowed with such soils. Thus, the market for oats, with strongly increasing prices, further stimulated the commercialization of agriculture and the diffusion of new methods. This effect was reinforced because oats grown for the market substituted for local flax production – which also thrived on barren soils – while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy was able to respond to the impetus from Western Europe during these decades, to diffuse the new influences in the economy and to integrate them in its development very successfully. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, with increased regional specialization, and population increase.

The composition of export goods also contributed to the diffusion of participation and of export income. Iron, wood and oats brought both regional and social distribution. The value of previously marginal resources, such as soils in the south and forests in the north, rose sharply. The technology was simple and labor intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which put an imprint upon Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further reinforced through initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, a more direct involvement by the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnation in the demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Increasing wages intensified mechanization both in agriculture and in industry, and the demand for more sophisticated machinery increased. At the same time consumer demand shifted towards better foodstuffs – such as milk, butter and meat – and towards more fabricated industrial goods.

The decades around the turn of the twentieth century meant a profound structural change in the composition of Swedish industrial expansion that was crucial for long term growth. New and more sophisticated enterprises were founded and expanded particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so-called Second Industrial Revolution, in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation, combining the engineering skills of companies such as ASEA (later ABB) with a large demand from energy-intensive processes and with the large supply of hydropower in Sweden.[4] Financing the rapid development of this large block engaged commercial banks, knitting closer ties between financial capital and industry. The State, once again, engaged itself in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period – all related to increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled “enterprises of genius,” and all are associated with renowned inventors and innovators. This was, of course, not an entirely Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany and in the U.S., and knowledge and innovative stimulus diffused among these economies. The question is rather why this new development became so strong in Sweden that new industries were able, within a relatively short period of time, to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skills were certainly important, developed in old heavy industrial branches such as the iron and steel industries and stimulated further by State initiatives such as railway construction or, more directly, the founding of the Royal Institute of Technology. But beyond that, economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the profitability of allocating resources to different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of early exports in Sweden. This was much reinforced by two components in the further development – emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital, mainly from Germany and France, while delivering an equally huge amount of labor, primarily to the U.S. Thus, Swedish relative factor prices changed dramatically. Swedish interest rates remained at rather high levels compared to leading European countries until 1910, due to a continuously large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth in Sweden, indicating a shift in income distribution in favor of labor, particularly unskilled labor, during this period of increased world market integration.

Table 5 Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift towards new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.[5]

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long-term Swedish growth is the acceleration in growth rates during the period 1910-1950, which in Europe at large was full of problems and catastrophes.[6] Swedish per capita production grew at 2.2 percent annually, while growth in the rest of Scandinavia was somewhat below 2 percent and growth in the rest of Europe hovered around 1 percent. The Swedish acceleration rested mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long term growth potential. It consisted of new industries and new infrastructures that involved industrialists and financial capitalists, as well as public sector support. It also involved industries meeting a relatively strong demand in war times, as well as in the interwar period, both domestically and abroad.

Second, the First World War meant an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating subsequent innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy, which were to become important for the implementation of the new social democratic “Keynesian” policies of the 1930s.

Third, demographic development favored the Swedish economy in this period. The share of the economically active age group, 15-64, grew substantially. This was due partly to the fact that prior emigration had reduced the size of the cohorts that would now have become old-age pensioners. Comparatively low mortality among young people during the 1910s, as well as the end of mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, in particular during the 1930s, when the household-forming age group of 25-30 years increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all, it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emerged from the interwar period were very much related to the new logic of industrial society, such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although the Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which to a much larger extent had been plagued by wars and crises.[7] The Swedish post-war period was characterized primarily by two phenomena – the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine) and the cementation of the “Swedish Model” of the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children’s day care, as well as to increases in social security programs and in public savings for transfers to pensioners. The consequence was high taxation. The other component was the regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality in combination with the social security program and taxation.

The labor market program was the result of negotiations between the trade unions and the employers’ organization. It was labeled “solidaristic wage policy” and had two elements. One was to achieve equal wages for equal work, regardless of individual companies’ ability to pay. The other was to raise the wage level in low-paid areas and thus to compress the wage distribution. The aim of the program was actually to increase the speed of structural rationalization in industry and to eliminate less productive companies and branches, transferring labor to the most productive export-oriented sectors while distributing income more equally. A drawback of the solidaristic wage policy, from an egalitarian point of view, was that profits soared in the productive sectors, since wage increases were held back. However, capital market regulations hindered high profits from being converted into very high incomes for shareholders. Profits were taxed lightly if they were converted into further investments within the company (the timing of the use of these funds was controlled by the State as part of its stabilization policy) but taxed heavily if distributed to shareholders. The result was that investments within existing profitable companies were supported and actually subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well. Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering (machinery, automobiles and shipbuilding) as well as in the resource-based industries of steel and paper. The expansion of the export sector was the main force behind the high growth rates and the productivity increases, but the sector was strongly supported by public investments, or publicly subsidized investments, in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of society, in which mass consumption and mass production were supported by social programs, by investment programs and by labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – such as steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent. It put obstacles in the way of flexibility and entrepreneurial initiative, and it reduced individual incentives for mobility. Thus, while the Swedish Model did foster the rationalization of existing industries well adapted to the post-war period, it did not support a more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy was still very open in the markets for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change within industry, as well as in economic policy, took place, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp industries were integrated into modernized paper works, the steel industry was concentrated and specialized, and mechanical engineering was digitalized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunications, pharmaceuticals, and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and the solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, simultaneously with a forceful revival of the stock market. The expansion of public sector services came to an end, and the taxation system was reformed with a reduction of marginal tax rates. Thus, Swedish economic policy and the welfare system converged toward mainstream European practice, which facilitated Sweden’s application for membership and eventual entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 does not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately result in a speed-up of growth but rather in imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005 Swedish growth accelerated quite forcefully in comparison with most Western economies. Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but with a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of more profound transformation and radical innovative behavior is not paradoxical.

Table 6 Annual Growth Rates per Capita, 1971-2005 (percent)

Period                 Sweden  Rest of Nordic Countries  Rest of Western Europe  United States  World Economy
1971/1975-1991/1995    1.2     2.1                       1.8                     1.6            1.4
1991/1995-2001/2005    2.4     2.5                       1.7                     2.1            2.1

Sources: See Table 1.
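The period rates in Table 6 are compound annual growth rates between five-year averages (the midpoints of 1971/1975 and 1991/1995 lie twenty years apart). As a minimal illustration of the arithmetic, with hypothetical per-capita GDP levels (the values 100 and 127 are invented for the example, not taken from the sources):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate, in percent per year, between two period averages."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100

# Hypothetical average per-capita GDP levels for two quinquennia 20 years apart:
start = 100.0  # average level, 1971/1975 (illustrative)
end = 127.0    # average level, 1991/1995 (illustrative)

print(round(cagr(start, end, 20), 2))  # prints 1.2
```

A roughly 27 percent rise in the level over twenty years thus corresponds to the 1.2 percent annual rate reported for Sweden in the first row.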

The recent acceleration in growth may also indicate that some of the basic traits of early industrialization still pertain to the Swedish economy – an international outlook in a small open economy fosters transformation and the adaptation of human skills to new circumstances as a major force behind long-term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International, forthcoming 2007.

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/