EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Arresting Contagion: Science, Policy, and Conflicts over Animal Disease Control

Author(s):Olmstead, Alan L.
Rhode, Paul W.
Reviewer(s):Craig, Lee A.

Published by EH.Net (October 2016)

Alan L. Olmstead and Paul W. Rhode, Arresting Contagion: Science, Policy, and Conflicts over Animal Disease Control. Cambridge, MA: Harvard University Press, 2015. x + 465 pp. $50 (cloth), ISBN: 978-0-674-72877-6.

Reviewed for EH.Net by Lee A. Craig, Department of Economics, North Carolina State University.

In Arresting Contagion, Alan Olmstead and Paul Rhode, both well-known economic historians, provide readers with a narrative of the history of the war on food-borne microorganisms.  It is a gory affair.  The warriors are, on the one side, veterinarians, public health officials, and U.S. Department of Agriculture scientists, and on the other, Boophilus microplus (the Texas Fever tick), Mycobacterium bovis (the source of Bovine Tuberculosis), and a few other bugs that make us sick, or even kill us, when we consume their livestock hosts.  The bugs are aided by two sets of humans: One is a group the authors call “deniers,” who deny the bugs are a major problem.  They resist eradication because of its perceived costs.  The other group is dominated by free-market economists, who resist eradication because it typically requires the all too visible hand of government.

The authors date the beginning of serious efforts at eradication and prevention to the establishment, by the U.S. Congress, of the Bureau of Animal Industry (BAI) in 1884.   Prior to that event, the hodgepodge of state laws and state and federal court rulings that covered livestock inspection allowed ranchers, shippers, and packers to pass along adulterated products.  The BAI was not an immediate panacea.  It took decades of scientific investigation, bureaucratic maneuvering, and the evolution of case law before eradication and prevention became the norms rather than anomalies.  Olmstead and Rhode give us a detailed narrative of various aspects of the history of eradication and prevention, documenting the process for several of the deadliest and most costly bugs.   To follow the scientific account of the BAI’s efforts, readers may occasionally find themselves searching for their freshman biology text (or Wikipedia on their iPhones), but the effort is rewarding.  If you’re having no trouble digesting that brisket sandwich you ate at lunch, this volume will tell you why.

Readers looking for one-stop shopping on the history of the eradication of meat- and milk-borne microorganisms should look elsewhere.  This is not an encyclopedia.  Some bugs get a few chapters, others a chapter or less, and some little more than a mention.  Rather, the authors focus on a handful of the more troublesome critters that were particularly acute in the United States.

A theme that runs through the volume is the juxtaposition between two conflicting approaches to government-enforced inspection: the public choice school and the public interest school.  Proponents of the former are represented most prominently by economists who argue that federal meat inspection initiatives were the result of bureaucratic meddling to the benefit of rent-seeking suppliers, often small-scale suppliers who found their businesses swamped by the rise of the major packers.  Proponents of the public interest view come from a broader set of academic fields and argue meat inspection was a logical, and valuable, effort to address a market imperfection.

Olmstead and Rhode are not coy when it comes to choosing between the two arguments.  They unambiguously side with the public interest folks.  In the war on bugs, the authors’ heroes are the vets, scientists and government bureaucrats who gave us (forced upon us, in the public choice view) federal meat inspection.  To support their case, the volume offers estimates of the rates of return on eradication and inspection efforts.  Even when deliberately biased downward, the figures are enormous.  For example, the authors’ estimate of the benefit-to-cost ratio of the eradication of foot-and-mouth disease is greater than 40 to 1 (p. 136).  The comparable figure for Texas Fever is between 9 and 20, depending upon the years in question (p. 273).  While these figures might seem high, they are conservative estimates, and it is worth noting that in the late nineteenth century the aggregate value of the nation’s livestock was greater than the capitalized value of its railroads (p. 9).  (Perhaps Irene Neu, George Rogers Taylor, and Carter Goodrich chose the wrong industry in the pre-dawn of the Cliometrics era.  One can only imagine the subsequent course of the discipline had they chosen to analyze livestock rather than railroads.)  By any reasonable standard, the returns to eradication represent large net benefits.  In the authors’ view those benefits were not “led by an invisible hand.”  Indeed, near the end of the volume, the authors summarize the U.S. eradication experience: “Success in the United States required the unflinching use of police power and great expense, but the net benefits were enormous” (p. 301).  That is a good, concise summary of the volume itself.

Some EH.Net patrons will probably lament the volume’s lack of explicit cliometric rigor.  For example, the estimates of the returns to various eradication efforts are little more than back-of-the-envelope exercises, albeit relatively sophisticated back-of-the-envelope exercises.  For their part, the authors make no apologies for trying to provide a history that is accessible to an audience beyond EH.Neters.  How you feel about such efforts will help guide you toward or away from this well-written history of an important but oft-overlooked topic.

Vox populi suggests a receptive lay audience for the authors’ message.  I have taught undergraduates off and on for over thirty years and frequently poll them on their support for various public policies.  Support for free trade consistently runs about fifty-fifty; roughly a third of the students will openly denounce the minimum wage; and only a small and declining few will defend agricultural price supports; but I’ve never heard a student say meat inspection is a bad idea!

Lee A. Craig is Alumni Distinguished Professor and Head of the Department of Economics at North Carolina State University.  He is the author of Josephus Daniels: His Life and Times, Chapel Hill: University of North Carolina Press, 2013.

Copyright (c) 2016 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (October 2016). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s):Agriculture, Natural Resources, and Extractive Industries
Economic Planning and Policy
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII

Institutions and Small Settler Economies: A Comparative Study of New Zealand and Uruguay, 1870−2008

Author(s):Schlueter, Andre
Reviewer(s):Bértola, Luis

Published by EH.Net (August 2016)

Andre Schlueter, Institutions and Small Settler Economies: A Comparative Study of New Zealand and Uruguay, 1870−2008.  New York: Palgrave Macmillan, 2014. xvi + 290 pp. $115 (hardcover), ISBN: 978-1-137-44828-6.

Reviewed for EH.Net by Luis Bértola, Department of Economic and Social History, Universidad de la República (Uruguay).

The aim of the book, based on a doctoral dissertation, is to test whether the social order approach developed by North, Wallis and Weingast (NWW) (2009) fits the particular development of two small settler societies, New Zealand and Uruguay, which were not considered in those authors’ work.

The book presents the problem in chapter 1, discusses the research strategy in chapter 2, and covers three different periods in chapters 3-5 (“The Golden Age of the Two Settler Economies” up to the 1920s; “The Great Divergence between New Zealand and Uruguay,” the period 1930-1970; and “Decades of Stop and Go” since the 1970s). Chapter 6 concludes and adds some references to later research on social orders in a somewhat confusing way; this discussion would have been better placed in the introduction or in chapter 2.

The book attempts to provide further answers to the question “Why the West?,” considering that “the transfer of institutions from the European core to overseas colonies before the industrial revolution facilitates a natural experiment concerning the impact of initial institutional frameworks on long-term economic development.” New Zealand represents the British institutional framework, and Uruguay the Spanish.

The research strategy, presented in the second chapter, is as follows. As North, Wallis and Weingast’s verbal model “needs to be applied more consistently in order to remedy its perceived disadvantages in comparison with other more formal theories” (p. 45), the author distills a set of key hypotheses from which ideal types are constructed. The historical cases are then read in the light of these ideal types in search of similarities and anomalies.  The stories must find the stylized causality: beliefs → institutions → organizations → policies → outcomes.

The analysis is performed at two levels, both in a comparative way. First, for the general propositions to hold, New Zealand should show higher and more stable growth rates and levels of per capita income than Uruguay. The former should also show a more sophisticated institutional framework and a higher degree of nationally organized violence than the latter. These aspects are studied in the following chapters with the use of macroeconomic data and different proxies for institutional quality and violence.

While the first level of analysis is somewhat descriptive of the main features of both societies, the second level presents the historical narrative in order to find the causality between organizations, policies and outcomes. The narrative tries to identify the three doorstep conditions for the transition from a Limited Access Society to an Open Access one: 1) the rule of law for elites; 2) perpetual forms of public and private elite organizations, including the state itself; and 3) consolidated political control of the military.

Schlueter concludes that, in broad terms, the general approach is valid. However, he finds some anomalies that seem to contradict some of its hypotheses. For instance, New Zealand achieved some features that should allow for the transition to the Open Access Order much later than expected, and its relative economic decline after the 1960s is not something the theory would lead one to expect. Thus, the author proposes making some adjustments and addenda to the discussed approach. “The reasons for the deviations of both settler societies from their theoretical ideal types lay within and outside of their national borders. NWW’s framework needs further amendments to account for local adaptations to inherited social structures, that is, a theoretically idealized British heritage, as well as for complex influences of exogenous powers on the rules of the game, its players, and their interaction” (pp. 203-05).

The book is well-written and interesting. The author makes an important effort to understand the history of these two countries. He is, however, open to criticism on several details of his interpretation. My main criticism is two-fold, or even contradictory.

In the first place, he misses the opportunity to criticize at least two aspects of the NWW approach. On the one hand, he ignores problems in NWW’s approach to international relations and the formal and informal institutions at that level. In NWW’s approach, external forces act only at time zero, establishing the local institutional environment that later reproduces itself, while the international sphere almost completely disappears from the analysis, even when it experiences radical changes. On the other hand, the book is missing a more critical discussion of the deterministic way in which causal relations are considered and a more serious evaluation of alternative explanations. An example is the use of contract-intensive money as a proxy for institutional stability. At least in the case of Uruguay, this proxy does not explain economic stagnation, but is the result of it. This is just one example of the need for a more in-depth discussion of causal relations, which are presented in a very deterministic way.

Nevertheless, my second criticism goes in the other direction; I want to make a defense of NWW. It seems somewhat unfair to 1) criticize these authors for being vague and imprecise, 2) reduce their approach to a set of testable hypotheses, and 3) show that the facts do not completely fit those hypotheses. From my point of view, NWW had very good reasons for not being too precise, because they have tried to understand real historical processes. Their courageous attempt to construct a comprehensive conceptual framework, even if subject to many criticisms, requires a flexible approach that tries to explain a wide variety of particular developments. Maybe the right question to ask is how these authors would have tried to explain these anomalies.

Reference:

Douglass C. North, John J. Wallis and Barry Weingast (2009), Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History, New York: Cambridge University Press.
Luis Bértola is the author of “An Overview of the Economic History of Uruguay since the 1870s” in the EH.Net Encyclopedia of Economic and Business History at https://eh.net/encyclopedia/bertola-uruguay-final/.
Copyright (c) 2016 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (August 2016). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s):Economic Development, Growth, and Aggregate Productivity
Economywide Country Studies and Comparative History
Geographic Area(s):Australia/New Zealand, incl. Pacific Islands
Latin America, incl. Mexico and the Caribbean
Time Period(s):19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

The Economic History of Mexico

The Economic History of Mexico

Richard Salvucci, Trinity University

 

Preface[1]

This article is a brief interpretive survey of some of the major features of the economic history of Mexico from pre-conquest to the present. I begin with the pre-capitalist economy of Mesoamerica. The colonial period is divided into the Habsburg and Bourbon regimes, although the focus is not really political: the emphasis is instead on the consequences of demographic and fiscal changes that colonialism brought.  Next I analyze the economic impact of independence and its accompanying conflict. A tentative effort to reconstruct secular patterns of growth in the nineteenth century follows, as well as an account of the effects of foreign intervention, war, and the so-called “dictatorship” of Porfirio Díaz.  I then examine the economic consequences of the Mexican Revolution down through the presidency of Lázaro Cárdenas, before considering the effects of the Great Depression and World War II. This is followed by an examination of the so-called Mexican Miracle, the period of import-substitution industrialization after World War II. The end of the “miracle” and the rise of economic instability in the 1970s and 1980s are discussed in some detail. I conclude with structural reforms in the 1990s, the North American Free Trade Agreement (NAFTA), and slow growth in Mexico since then. It is impossible to be comprehensive, and the references appearing in the citations are highly selective and biased (where possible) in favor of English-language works, although Spanish is a must for getting beyond the basics. This is especially true in economic history, where some of the most innovative and revisionist work is being done, as it should be, by historians and economists in Mexico.[2]

 

Where (and What) is Mexico?

Over its long history, Mexico’s boundaries have shifted, though for long stretches they were broadly stable. Colonial Mexico basically stretched from Guatemala, across what is now California and the Southwestern United States, and vaguely into the Pacific Northwest.  There matters stood for more than three centuries.[3] The big shock came at the end of the War of 1847 (“the Mexican-American War” in U.S. history). The Treaty of Guadalupe Hidalgo (1848) ended the war, but in so doing, ceded half of Mexico’s former territory to the United States—recall Texas had been lost in 1836. The northern boundary now ran on a line beginning with the Rio Grande to El Paso, and thence more or less west to the Pacific Ocean south of San Diego. With one major adjustment in 1853 (the Gadsden Purchase or Treaty of the Mesilla) and minor ones thereafter, because of the shifting of the Rio Grande, there it has remained.

Before the arrival of the Europeans, Mexico was a congeries of ethnic and city states whose own boundaries were unstable. Prior to the emergence of the most powerful of these states in the fifteenth century, the so-called Triple Alliance (popularly, the “Aztec Empire”), Mesoamerica consisted of cultural regions determined by political elites and spheres of influence that were dominated by large ceremonial centers such as La Venta, Teotihuacán, and Tula.

While such regions may have been dominant at different times, they were never “economically” independent of one another. At Teotihuacan, there were living quarters given over to Olmec residents from the Veracruz region, presumably merchants. Mesoamerica was connected, if not unified, by an ongoing trade in luxury goods and valuable stones such as jade, turquoise and precious feathers. This was not, however, trade driven primarily by factor endowments and relative costs. Climate and resource endowments did differ significantly over the widely diverse regions and microclimates of Mesoamerica. Yet trade was also political and ritualized in religious belief. For example, calling the shipment of turquoise from the (U.S.) Southwest to Central Mexico the outcome of market activity is an anachronism. In the very long run, such prehistorical exchange facilitated the later emergence of trade routes, roads, and more technologically advanced forms of transport. But arbitrage does not appear to have figured importantly in it.[4]

In sum, what we call “Mexico” in a modern sense is not of much use to the economic historian with an interest in the country before 1870, which is to say, the great bulk of its history. In these years, specificity of time and place, sometimes reaching to the village level, is an indispensable prerequisite for meaningful discussion. At the very least, it is usually advisable to be aware of substantial regional differences which reflect the ethnic and linguistic diversity of the country both before and after the arrival of the Europeans. There are fully ten language families in Mexico, and two of them, Nahuatl and Quiché, number over a million speakers each.[5]

 

Trade and Tribute before the Europeans

In the codices, or folded deerskin paintings, that the Europeans examined (or actually commissioned), they soon became aware of a prominent form of Mesoamerican economic activity: tribute, or taxation in kind, or even labor services. In the absence of anything that served as money, tribute was forced exchange. Tribute has been interpreted as a means of redistribution in a nonmonetary economy. Social and political units formed a basis for assessment, and the goods collected included maize, beans, chile and cotton cloth. It was through tribute that the indigenous “empires” mobilized labor and resources. There is little or no evidence for the existence of labor or land markets to do so, for these were a European import, although marketplaces for goods existed in profusion.

The preconquest reliance on barter and the absence of money largely account for the ubiquity of tribute. The absence of money itself is much more difficult to explain and was surely an obstacle to the growth of productivity in the indigenous economies.

Tribute was a near-universal attribute of Mesoamerican ceremonial centers and political empires. The city of Teotihuacan (ca. 600 CE, with a population of 125,000 or more) in central Mexico depended on tribute to support an upper stratum of priests and nobles while the tributary population itself lived at subsistence. Tlatelolco (ca. 1520, with a population ranging from 50 to 100 thousand) drew maize, cotton, cacao, beans and precious feathers from a wide swath of territory, extending broadly from the Pacific to the Gulf coast, that supported an upper stratum of priests, warriors, nobles, and merchants. It was this urban complex, sitting atop the lagoons that filled the Valley of Mexico, that so awed the arriving conquerors.

While the characterization of tribute as both a corvée and a tax in kind to support nonproductive populations is surely correct, its persistence in altered (i.e., monetized) form under colonial rule does suggest an important question. The tributary area of the Mexica (“Aztec” is a political term, not an ethnic one) broadly comprised a Pacific slope, a central valley, and a Gulf slope. These embrace a wide range of geographic features ranging from rugged volcanic highlands (and even higher snow-capped volcanoes) to marshy, humid coastal plains. Even today, travel through these regions is challenging. Lacking both the wheel and draught animals, the indigenous peoples relied on human transport, or, where possible, waterborne exchange. However we measure the costs of transportation, they were high. In the colonial period, they typically circumscribed the subsistence radius of markets to 25 to 35 miles. Under the circumstances, it is not easy to imagine that voluntary exchange, particularly between the coastal lowlands and the temperate to cold highlands and mountains, would be profitable for all but the most highly valued goods. In some parts of Mexico (as in the Andean region), linkages of family and kinship bound different regions together in a cult of reciprocal economic obligations. Yet absent such connections, it is not hard to imagine that, for example, transporting woven cottons from the coastal lowlands to the population centers of the highlands could become a political obligation rather than a matter of profitable, voluntary exchange. The relatively ambiguous role of markets in both labor and goods that persisted into the nineteenth century may perhaps derive from just this combination of climatic and geographical characteristics. It is what made voluntary exchange under capitalistic markets such a puzzlingly problematic answer to the ordinary demands of economic activity.

 

[See the relief map below for the principal physical features of Mexico.]


http://www.igeograf.unam.mx/sigg/publicaciones/atlas/anm-2007/muestra_mapa.php?cual_mapa=MG_I_1.jpg

[See the political map below for Mexican states and state capitals.]


 

 

Used by permission of the University of Texas Libraries, The University of Texas at Austin.

 

“New Spain” or Colonial Mexico: The First Phase

Mexico was established by military conquest and civil war. In the process, a civilization with its own institutions and complex culture was profoundly modified and altered, if not precisely destroyed, by the European invaders. The catastrophic elements of conquest, including the sharp decline of the existing indigenous population, from perhaps 25 million to fewer than a million within a century due to warfare, disease, social disorganization and the imposition of demands for labor and resources, should nevertheless not preclude some assessment, however tentative, of its economic level in 1519, when the Europeans arrived.[6]

Recent thinking suggests that Spain was far from poor when it began its overseas expansion. If this were so, the implications of the Europeans’ reactions to what they found on the mainland of Mexico (not, significantly, in the Caribbean, and especially in Cuba, where they were first established) are important. We have several accounts of the conquest of Mexico by the European participants, of which Bernal Díaz del Castillo’s is the best known, but not the only one. The reaction of the Europeans was almost uniform astonishment at the apparent material wealth of Tenochtitlan. The public buildings, spacious residences of the temple precinct, the causeways linking the island to the shore, and the fantastic array of goods available in the marketplace evoked comparisons to Venice, Constantinople, and other wealthy centers of European civilization. While it is true that this was a view of the indigenous elite, the beneficiaries of the wealth accumulated from numerous tributaries, it hardly suggests anything other than a kind of storied opulence. Of course, the peasant commoners lived at subsistence and enjoyed no such privileges, but so, too, did the peasants of the society from which Bernal Díaz, Cortés, Pedro de Alvarado and the other conquerors were drawn. It is hard to imagine that the average standard of living in Mexico was any lower than that of the Iberian Peninsula. The conquerors remarked on the physical size and apparent robust health of the people whom they met, and from this, scholars such as Woodrow Borah and Sherburne Cook concluded that the physical size of the Europeans and the Mexicans was about the same. Borah and Cook surmised that caloric intake per individual in Central Mexico was around 1,900 calories per day, which certainly seems comparable to European levels.[7]

Certainly, technological differences with Europe hampered commercial exchange: the absence of the wheel for transportation, metallurgy that did not include iron, and the exclusive reliance on pictographic writing systems. Yet by the same token, Mesoamerican agricultural technology was richly diverse and especially oriented toward labor-intensive techniques, well suited to pre-conquest Mexico’s factor endowments. As Gene Wilken points out, Bernardino de Sahagún explained in his General History of the Things of New Spain that the Nahua farmer recognized two dozen soil types related to origin, source, color, texture, smell, consistency and organic content.  They were expert at soil management.[8] So it is possible not only to misspecify but also to overstate the technological “backwardness” of Mesoamerica relative to Europe, and historians routinely have.

The essentially political and clan-based nature of economic activity made the distribution of output somewhat different from standard neoclassical models. Although no one seriously maintains that indigenous civilization did not include private property and, in fact, property rights in humans, the distribution of product tended to emphasize average rather than marginal product. If responsibility for tribute was collective, it is logical to suppose that there was some element of redistribution and collective claim on output by the basic social groups of indigenous society, the clans or calpulli.[9] Whatever the case, it seems clear that viewing indigenous society and economy as strained by population growth to the point of collapse, as the so-called “Berkeley school” did in the 1950s, is no longer tenable. It is more likely that the tensions exploited by the Europeans to divide and conquer their native hosts and so erect a colonial state on pre-existing native entities were mainly political rather than socioeconomic. It was through the assistance of native allies such as the Tlaxcalans, as well as with the help of previously unknown diseases such as smallpox that ravaged the indigenous peoples, that the Europeans were able to place a weakened Tenochtitlan under siege and finally defeat it.

 

Colonialism and Economic Adjustment to Population Decline

With the subjection first of Tenochtitlan and Tlatelolco and then of other polities and peoples, a process that would ultimately stretch well into the nineteenth century and was never really completed, the Europeans turned their attention to making colonialism pay. The process had several components: the modification or introduction of institutions of rule and appropriation; the introduction of new flora and fauna that could be turned to economic use; the reorientation of a previously autarkic and precapitalist economy to the demands of trade and commercial exploitation; and the implementation of European fiscal sovereignty. These processes were complex, required much time, and were, in many cases, only partly successful. There is considerable speculation regarding how long it took before Spain (arguably a relevant term by the mid-sixteenth century) made colonialism pay. The best we can do is present a schematic view of what occurred. Regional variations were enormous: a “typical” outcome or institution of colonialism may well have been an outcome visible in central Mexico. Moreover, all generalizations are fragile, rest on limited quantitative evidence, and will no doubt be substantially modified eventually. The message is simple: proceed with caution.

The Europeans did not seek to take Mesoamerica as a tabula rasa. In some ways, they would have been happy simply to become the latest in a long line of ruling dynasties established by decapitating native elites and assuming control. The initial demand of the conquerors for access to native labor in the so-called encomienda was precisely that, with the actual task of governing left to the surviving and collaborating elite: the principle of “indirect rule.”[10] There were two problems with this strategy: the natives resisted and the natives died. They died in such large numbers as to make the original strategy impracticable.

The number of people who lived in Mesoamerica has long been a subject of controversy, but there is no point in spelling it out once again. The numbers are unknowable and, in an economic sense, not really important. The population of Tenochtitlan has been variously estimated between 50 and 200 thousand individuals, depending on the instruments of estimation.  As previously mentioned, some estimates of the Central Mexican population range as high as 25 million on the eve of the European conquest, and virtually no serious student accepts the small population estimates based on the work of Angel Rosenblatt. The point is that labor was abundant relative to land, and that the small surpluses of a large tributary population must have supported the opulent elite that Bernal Díaz and his companions described.

By 1620, or thereabouts, the indigenous population had fallen to less than a million according to Cook and Borah. This is not just the quantitative speculation of modern historical demographers. Contemporaries such as Jerónimo de Mendieta in his Historia eclesiástica Indiana (1596) spoke of towns formerly densely populated now witness to “the palaces of those former Lords ruined or on the verge of. The homes of the commoners mostly empty, roads and streets deserted, churches empty on feast days, the few Indians who populate the towns in Spanish farms and factories.” Mendieta was an eyewitness to the catastrophic toll that European microbes and warfare took on the native population. There was a smallpox epidemic in 1519-20 in which 5 to 8 million died. The epidemic of hemorrhagic fever from 1545 to 1548 was one of the worst demographic catastrophes in human history, killing 5 to 15 million people. And again in 1576 to 1578, when 2 to 2.5 million people died, we have clear evidence that land prices collapsed in Coyoacán, a village outside Mexico City (as the reconstructed Tenochtitlán was called). The death toll was staggering. Lesser outbreaks were registered in 1559, 1566, 1587, 1592, 1601, 1604, 1606, 1613, 1624, and 1642. The larger point is that the intensive use of native labor, such as the encomienda, had to come to an end, whatever its legal status had become by virtue of the New Laws (1542). The encomienda or the simple exploitation of massive numbers of indigenous workers was no longer possible. There were too few “Indians” by the end of the sixteenth century.[11]

As a result, the institutions and methods of economic appropriation were forced to change. The Europeans introduced pastoral agriculture – the herding of cattle and sheep – exploiting now-abundant land and scarce labor in the form of the hacienda, while the remaining natives were brought together in “villages” whose origins were essentially post- rather than pre-conquest, the so-called congregaciones. At the same time, titles to now-vacant lands were created, regularized, and “composed.”[12] (Land titles were a European innovation as well.) Sheep and cattle became part of the new institutional backbone of the colony. The natives would continue to rely on maize for the better part of their subsistence, but the Europeans introduced wheat, olives (oil), grapes (wine) and even chickens, which the natives rapidly adopted. On the whole, the results of these alterations were complex. Some scholars argue that the native diet improved even in the face of their diminishing numbers, a consequence of increased land per person and of a greater variety of foodstuffs, and that the agricultural potential of the colony now called New Spain was enhanced. By the beginning of the seventeenth century, the combined indigenous, European immigrant, and new mixed-blood populations could largely survive on the basis of their own production. The introduction of sheep led to the manufacture of woolens in what were called obrajes, or manufactories, in Puebla, Querétaro, and Coyoacán. The native peoples continued to produce cottons (a domestic crop) under the stimulus of European organization, lending, and marketing. Extensive pastoralism, the cultivation of cereals, and even the incorporation of native labor then characterized the emergence of the great estates or haciendas, which remained a characteristic rural institution into the twentieth century, when the Mexican Revolution put an end to many of them. 
Thus the colony of New Spain came to feed, clothe, and house itself largely independent of metropolitan Spain’s direction. Mexico before the Conquest was certainly self-sufficient. The extent to which the immigrant and American-born Spanish or creole population depended on imports of wine, oil, and other foodstuffs and textiles in the decades immediately following the conquest is much less clear.

At the same time, other profound changes accompanied the introduction of Europeans, their crops, and their diseases into what they termed the “kingdom” (not colony, for constitutional reasons) of New Spain.[13] Prior to the conquest, land and labor had been commoditized, but not to any significant extent, although a distinction was recognized between possession and ownership. Scholars who have closely examined the emergence of land markets after the conquest—mainly in the Valley of Mexico—are virtually unanimous in this conclusion. To the extent that markets in labor and commodities emerged, the development took until the 1630s to reach maturity in the Valley of Mexico, and later elsewhere in New Spain. Even then, older mechanisms of allocating labor by administrative means (repartimiento) or by outright coercion persisted. Purely economic incentives in the form of money wages and prices never seemed adequate to the job of mobilizing resources, and those with access to political power were reluctant to pay a competitive wage. In New Spain, the use of some sort of political power or rent-seeking nearly always accompanied labor recruitment. It was, quite simply, an attempt to evade the implications of relative scarcity, and it renders the entire notion of “capitalism” as a driving economic force in colonial Mexico quite inexact.

 

Why the Settlers Resisted the Implications of Scarce Labor

The reasons behind this development are complex and varied. The evidence we have for the Valley of Mexico demonstrates that the relative price of labor rose while the relative price of land fell, even when nominal movements of one or the other remained fairly limited. For instance, the table below shows that from 1570-75 through 1591-1606, the price of unskilled labor in the Valley of Mexico nearly tripled while the price of land in the Valley (Coyoacán) fell by nearly two thirds. On the whole, the price of labor relative to land increased by nearly 800 percent. The evolution of relative prices would inevitably have worked against the demanders of labor (Europeans and, increasingly, creoles or Americans of largely European ancestry) and in favor of the suppliers (native labor, or people of mixed race generically termed mestizo). This was not, of course, what the Europeans had in mind, and by capturing legal institutions (local magistrates, in particular) they frequently sought to substitute compulsion for what would have been costly “free labor.” What has been termed the “depression” of the seventeenth century may well represent one of the consequences of this evolution: an abundance of land, a scarcity of labor, and the attempt of the new rulers to adjust to changing relative prices. There were repeated royal prohibitions on the use of forced indigenous labor in both public and private works, and thus a reduction in the supply of labor. All highly speculative, no doubt, but the adjustment came during the central decades of the seventeenth century, when New Spain increasingly produced its own woolens and cottons, largely assumed the task of providing itself with foodstuffs, and was thus required to save and invest more. No doubt the new rulers felt the strain of trying to do more with less.[14]

 

Years        Land Price Index    Labor Price Index    (Labor/Land) Index
1570-1575    100                 100                  100
1576-1590    50                  143                  286
1591-1606    33                  286                  867

 

Source: Calculated from Rebecca Horn, Postconquest Coyoacan: Nahua-Spanish Relations in Central Mexico, 1519-1650 (Stanford: Stanford University Press, 1997), p. 208, and José Ignacio Urquiola Permisan, “Salarios y precios en la industria manufacturera textil de la lana en Nueva España, 1570-1635,” in Virginia García Acosta (ed.), Los precios de alimentos y manufacturas novohispanos (México, DF: CIESAS, 1995), p. 206.
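The relative index in the third column is simply the labor index divided by the land index, rebased to 100. A quick check of the arithmetic (a hypothetical script, not from the sources cited):

```python
# Rebuild the (labor/land) column of the table from the two price indices.
# Index values as tabulated from Horn (1997) and Urquiola Permisan (1995).
land_index = {"1570-1575": 100, "1576-1590": 50, "1591-1606": 33}
labor_index = {"1570-1575": 100, "1576-1590": 143, "1591-1606": 286}

relative_index = {
    period: round(labor_index[period] / land_index[period] * 100)
    for period in land_index
}
print(relative_index)  # {'1570-1575': 100, '1576-1590': 286, '1591-1606': 867}
```

The jump from 100 to 867 is the source of the “nearly 800 percent” increase in the relative price of labor noted in the text.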

 

The overall role of Mexico within the Hapsburg Empire was in flux as well. Nothing signals the change as much as the emergence of silver mining as the principal source of Mexican exportables in the second half of the sixteenth century. While Mexico would soon be eclipsed by Peru as the most productive center of silver mining—at least until the eighteenth century—the discovery of significant silver mines in Zacatecas in the 1540s transformed the economy of the Spanish empire and the character of New Spain’s economy as well.


Silver Mining

While silver mining and smelting were practiced before the conquest, they were never a focal point of indigenous activity. But for the Europeans, Mexico was largely about silver mining. From the mid-sixteenth century onward, it was explicitly understood by the viceroys that they were to do all in their power to “favor the mines,” as one memorable royal instruction enjoined. Again, there has been much controversy over the precise amounts of silver that Mexico sent to the Iberian Peninsula. What we do know with certainty is that Mexico (and the Spanish Empire) became the leading source of silver, monetary reserves, and thus of high-powered money. Over the course of the colonial period, most sources agree that Mexico provided nearly 2 billion pesos (dollars), or roughly 1.6 billion troy ounces, to the world economy. The graph below, taken from the work of John TePaske, pictures the remissions of all Mexican silver to both Spain and the Philippines.[15]

[Graph: remissions of Mexican silver to Spain and the Philippines, after TePaske]

Since the population of Mexico under Spanish rule was at most 6 million people by the end of the colonial period, the kingdom’s silver output could only be considered astronomical.
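The two totals just cited imply a conversion factor between pesos and fine silver that is easy to verify. The figures below come from the text; the troy-ounce weight is the standard one (a back-of-the-envelope sketch, not a claim about mint practice):

```python
# Implied silver content of the colonial peso, from the totals in the text.
total_pesos = 2.0e9       # ~2 billion pesos remitted over the colonial period
total_troy_oz = 1.6e9     # ~1.6 billion troy ounces

oz_per_peso = total_troy_oz / total_pesos      # implied ~0.8 troy oz per peso
grams_per_peso = oz_per_peso * 31.1035         # one troy ounce = 31.1035 g

print(oz_per_peso, round(grams_per_peso, 1))   # 0.8 troy oz, about 24.9 g
```

The implied 24.9 grams per peso is in the neighborhood of the silver peso’s fine-metal content of roughly 24-25 grams, which suggests the two totals are mutually consistent.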

This production has to be considered in both its domestic and international dimensions. From a domestic perspective, the mines were what a later generation of economists would call “growth poles.” They were markets in which inputs were transformed into tradable outputs at a much higher rate of productivity (because of mining’s relatively advanced technology) than in Mexico’s other activities. Silver thus became Mexico’s principal exportable good, and remained so well into the late nineteenth century. The residual claimants on silver production were many and varied. There were, of course, the silver miners themselves in Mexico and their merchant financiers and suppliers. They ranged from some of the wealthiest people in the world at the time, such as the Count of Regla (1710-1781), who donated warships to Spain in the eighteenth century, to individual natives in Zacatecas smelting their own stocks of silver ore.[16] While the conditions of labor in Mexico’s silver mines were almost uniformly bad, compensation ranged from above-market wages paid to free labor in the prosperous larger mines of the Bajío and the North to forced village labor drafts in more marginal (and presumably less profitable) sites such as Taxco. In the Iberian Peninsula, income from American silver mines ultimately supported not only a class of merchant entrepreneurs in the large port cities, but virtually the core of the Spanish political nation, including monarchs, royal officials, churchmen, the military, and more. And finally, silver flowed to those who valued it most highly throughout the world. It is generally estimated that 40 percent of Spain’s American (not just Mexican, but Peruvian as well) silver production ended up in hoards in China.

Within New Spain, mining centers such as Guanajuato, San Luis Potosí, and Zacatecas became places where economic growth took place rapidly, where labor markets evolved more readily, and where the standard of living became obviously higher than in neighboring regions. Mining centers tended to crowd out growth elsewhere because the rate of return on successful mines exceeded what could be earned in commerce, agriculture, and manufacturing. Because silver was the numeraire for Mexican prices—Mexico was effectively on a silver standard—variations in silver production could and did have substantial effects on real economic activity elsewhere in New Spain. There is considerable evidence that silver mining saddled Mexico with an early case of “Dutch disease,” in which irreducible costs imposed by the silver standard ultimately rendered manufacturing and the production of other tradable goods in New Spain uncompetitive. For this reason, the expansion of Mexican silver production in the years after 1750 was never unambiguously accompanied by overall, as opposed to localized, prosperity. Silver mining tended to absorb a disproportionate quantity of resources and to keep New Spain’s price level high, even when the business cycle slowed down—a fact that was to impress visitors to Mexico well into the nineteenth century. Mexican silver accounted for well over three-quarters of exports by value into the nineteenth century as well. The estimates vary widely, for silver was by no means the only, or even the most important, source of revenue to the Crown, but by the end of the colonial era the Kingdom of New Spain probably accounted for 25 percent of the Crown’s imperial income.[17] That is why reformist proposals circulating in governing circles in Madrid in the late eighteenth century fixed on Mexico. If there was any threat to the American Empire, royal officials thought that Mexico, and increasingly Cuba, were worth holding on to. 
From a fiscal standpoint, Mexico had become just that important.[18]

 

“New Spain”: The Second Phase of the Bourbon “Reforms”

In 1700, the last of the Spanish Hapsburgs died and a disputed succession followed. The ensuing conflict, known as the War of the Spanish Succession, came to an end in 1714. The grandson of the French king Louis XIV came to the Spanish throne as King Philip V. The dynasty he represented was known as the Bourbons. For the next century or so, they were to determine the fortunes of New Spain. Traditionally, the Bourbons, especially the later ones, have been associated with an effort to “renationalize” the Spanish empire in America after it had been thoroughly penetrated by French, Dutch, and lastly British commercial interests.[19]

There were at least two areas in which the Bourbon dynasty, “reformist” or not, affected the Mexican economy. One dealt with raising revenue; the other was the international position of the imperial economy, specifically the volume and value of trade. A series of statistics calculated by Richard Garner shows that the share of Mexican output, or estimated GDP, taken by taxes grew by 167 percent between 1700 and 1800. The number of taxes collected by the Royal Treasury increased from 34 to 112 between 1760 and 1810. This increase, sometimes labelled a Bourbon “reconquest” of Mexico after a century and a half of drift under the Hapsburgs, occurred because of Spain’s need to finance increasingly frequent and costly wars of empire in the eighteenth century. An entire array of new taxes and fiscal placemen came to Mexico. They affected (and alienated) everyone, from the wealthiest merchant to the humblest villager. If they did nothing else, the Bourbons proved to be expert tax collectors.[20]

The second and equally consequential change in imperial management lay in the revision and “deregulation” of New Spain’s international trade: the evolution from a “fleet” system to a regime of independent sailings and then, finally, of voyages to and from a far larger variety of metropolitan and colonial ports. From the mid-sixteenth century onward, ocean-going trade between Spain and the Americas was, in theory at least, closely regulated and supervised. Ships in convoy (flota) sailed together annually under license from the monarchy and returned together as well. Since so much silver specie was carried, the system made sense, even if the flotas made a tempting target and the problem of contraband was immense. The point of departure was Seville and, later, Cádiz. Under pressure from other outports, the system was finally relaxed in the late eighteenth century. As a consequence, the volume and value of trade to Mexico increased as the price of importables fell. Import-competing industries in Mexico, especially textiles, suffered under the competition, and established merchants complained that the new system of trade was too loose. But to no avail. There is no measure of the barter terms of trade for the eighteenth century, but anecdotal evidence suggests they improved for Mexico. Nevertheless, it is doubtful that these gains came anywhere close to offsetting the financial cost of Spain’s “reconquest” of Mexico.[21]

On the other hand, the few accounts of per capita real income growth in the eighteenth century that exist suggest little more than stagnation, the result of population growth and a rising price level. Admittedly, looking for modern economic growth in Mexico in the eighteenth century is an anachronism, although there is at least anecdotal evidence of technological change in silver mining, especially in the use of gunpowder for blasting and excavating, and of some productivity increase in the industry. So even though the share of international trade outside of goods such as cochineal and silver was quite small, at the margin the changes in the trade regime were important. There is also some indication that asset income rose and labor income fell, which fueled growing social tensions in New Spain. In the last analysis, the growing fiscal pressure of the Spanish empire came when the standard of living for most people in Mexico—the native and mixed-blood population—was stagnating. During periodic subsistence crises, especially those propagated by drought and epidemic disease, and mostly in the 1780s, living standards fell. Many historians think of late colonial Mexico as something of a powder keg waiting to explode. When it did, in 1810, the explosion was the result of a political crisis at home and a dynastic failure abroad. What New Spain had negotiated during the War of the Spanish Succession—regime change—proved impossible to surmount during the Napoleonic Wars (1794-1815). This may well be the most sensitive indicator of how economic conditions changed in New Spain under the heavy, not to say clumsy, hand of the Bourbon “reforms.”[22]

 

The War for Independence, the Insurgency, and Their Legacy

The abdication of the Bourbon monarchy to Napoleon Bonaparte in 1808 produced a series of events that ultimately resulted in the independence of New Spain. The rupture was accompanied by a violent peasant rebellion, headed by the clerics Miguel Hidalgo and José María Morelos, that one way or another carried off 10 percent of the population between 1810 and 1820. Internal commerce was largely paralyzed. Silver mining essentially collapsed between 1810 and 1812, and a full recovery of mining output was delayed until the 1840s. The mines located in zones of heavy combat, such as Guanajuato and Querétaro, were abandoned by fleeing workers. Thus neglected, they quickly flooded.

At the same time, the fiscal and human costs of this period, the Insurgency, were even greater.[23] The heavy borrowings in which the Bourbons had engaged to finance their military alliances left Mexico with a considerable legacy of internal debt, estimated at £16 million at Independence. The damage to the fiscal, bureaucratic, and administrative structure of New Spain, in the face of the continuing threat of Spanish reinvasion in the 1820s (Spain did not recognize Mexican independence, declared in 1821), drove the independent governments into foreign borrowing on the London market to the tune of £6.4 million in order to finance continuing heavy military outlays. With a reduced fiscal capacity, in part the legacy of the Insurgency and in part the deliberate effort of Mexican elites to resist any repetition of Bourbon-style taxation, Mexico defaulted on its foreign debt in 1827. Through a serpentine sixty-year history of moratoria, restructuring, and repudiation (1867), it took until 1884 for the government to regain access to international capital markets, at what cost can only be imagined. Private-sector borrowing and lending continued, although to what extent is currently unknown. What is clear is that the total (internal plus external) indebtedness of Mexico relative to late colonial GDP was somewhere in the range of 47 to 56 percent.[24]
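The 47 to 56 percent range can be checked roughly against figures that appear elsewhere in the text (a population of about 6 million and late colonial income of perhaps 40 pesos per head), if one assumes an exchange rate of about 5 pesos per pound sterling. The exchange rate is an assumption for illustration, not a figure from the source:

```python
# Back-of-the-envelope debt/GDP check for Mexico at Independence.
internal_debt_gbp = 16.0e6     # internal debt, pounds sterling (text)
external_debt_gbp = 6.4e6      # London loans of the 1820s (text)
pesos_per_pound = 5.0          # ASSUMED rough par, not from the source

debt_pesos = (internal_debt_gbp + external_debt_gbp) * pesos_per_pound

population = 6.0e6             # late colonial population (text)
income_per_head = 40.0         # pesos per head, late colonial (text)
gdp_pesos = population * income_per_head

print(round(debt_pesos / gdp_pesos, 2))  # 0.47, the low end of the cited range
```

That the result lands at the bottom of the cited 47-56 percent band suggests the range rests on assumptions close to these.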

This was, perhaps, not an insubstantial amount for a country whose mechanisms of public finance were in what could be mildly termed chaotic condition in the 1820s and 1830s as the form, philosophy, and mechanics of government oscillated from federalist to centralist and back into the 1850s.  Leaving aside simple questions of uncertainty, there is the very real matter that the national government—whatever the state of private wealth—lacked the capacity to service debt because national and regional elites denied it the means to do so. This issue would bedevil successive regimes into the late nineteenth century, and, indeed, into the twentieth.[25]

At the same time, the demographic effects of the Insurgency exacted a cost in terms of lost output from the 1810s through the 1840s. Gaping holes in the labor force emerged, especially in the fertile agricultural plains of the Bajío, creating further obstacles to the growth of output. It is difficult to generalize about the fortunes of the Mexican economy in this period because of the dramatic regional variations in the Republic’s economy. A rough estimate of output per head in the late colonial period was perhaps 40 pesos (dollars).[26] After a sharp contraction in the 1810s, income remained in that neighborhood well into the 1840s, at least until the eve of the war with the United States in 1846. By the time United States troops crossed the Rio Grande, a recovery had been under way, but the war arrested it. Further political turmoil and civil war in the 1850s and 1860s represented setbacks as well. In this way, a half century or so of potential economic growth was sacrificed from the 1810s through the 1870s. This was not an uncommon experience in Latin America in the nineteenth century, and the period has even been called the Stage of the Great Delay.[27] Whatever the exact rate of real per capita income growth was, it is hard to imagine it ever exceeded two percent, if indeed it reached much more than half that.

 

Agricultural Recovery and War

On the other hand, it is clear that there was a recovery in agriculture in the central regions of the country, most notably in the staple maize crop and in wheat. The famines of the late colonial era, especially of 1785-86, when massive numbers perished, were not repeated. There were years of scarcity and periodic corresponding outbreaks of epidemic disease—the cholera epidemic of 1832 affected Mexico as it did so many other places—but by and large, the dramatic human wastage of the colonial period ceased, and the death rate does appear to have begun to fall. Very good series on wheat deliveries and retail sales taxes for the city of Puebla southeast of Mexico City show a similarly strong recovery in the 1830s and early 1840s, punctuated only by the cholera epidemic whose effects were felt everywhere.[28]

Ironically, while the Panic of 1837 appears to have hit the financial economy in Mexico hard, with a dramatic fall in public borrowing (and private lending), especially in the capital,[29] an incipient recovery of the real economy was ended by war with the United States. It is not possible to put numbers on the cost to Mexico of the war, which lasted intermittently from 1846 to 1848, but the loss of what had been the Southwest under Mexico is most often emphasized. This may or may not be accurate. Certainly, the loss of California, where gold was discovered in January 1848, weighs heavily on the historical imaginations of modern Mexicans. There is also the sense that the indemnity paid by the United States—$15 million—was wholly inadequate, which seems at least understandable when one considers that Andrew Jackson had offered $5 million to purchase Texas alone in 1829.

It has been estimated that the agricultural output in 1900 of the Mexican “cession,” as the lost territory was called, was nearly $64 million, and that the value of livestock in the territory was over $100 million. The value of gold and silver produced was about $35 million. Whether it is reasonable to employ these numbers in estimating the present value of output relative to the indemnity paid is at least debatable as a counterfactual, unless one chooses to regard the indemnity as the annuitized value of a perpetuity “purchased” from Mexico at gunpoint, which seems more like robbery than exchange. In the long run, the loss may have been staggering, but in the short run, much less so. The northern territories Mexico lost had yielded very little up until the war. In fact, the balance of costs and revenues to the Mexican government may well have been negative.[30]

Whatever the case, the decades following the war with the United States until the beginning of the administration of Porfirio Díaz (1876) are typically regarded as a step backward. The reasons are several. In 1850, the government essentially went broke. While it is true that its financial position had been disintegrating since the mid-1830s, 1850 marked a turning point. The entire indemnity payment from the United States was consumed in debt service, yet this made no appreciable dent in the outstanding principal, which hovered around 50 million pesos (dollars). The limits of debt sustainability had been reached: governing turned into a wild search for resources, which proved fruitless. Mexico continued to sell off parts of its territory, as in the Treaty of the Mesilla (1853), or Gadsden Purchase, whose proceeds largely ended up in the hands of domestic financiers rather than foreign creditors.[31] Political divisions, bad enough before the war with the United States, turned catastrophic. A series of internal revolts, uprisings, and military pronouncements segued into yet another violent civil war between liberals and conservatives—the latter now a formal party—the so-called Three Years’ War (1858-61). In 1862, frustrated by Mexico’s suspension of foreign debt service, Great Britain, Spain, and France seized Veracruz. A Hapsburg prince, Maximilian, was installed as Mexico’s second “emperor.” (Agustín de Iturbide was the first.) While only the French actively prosecuted the war within Mexico, and while they never controlled more than a very small part of the country, the disruption was substantial. By 1867, with Maximilian deposed and the French army withdrawn, the country required serious reconstruction.[32]

 

Juárez, Díaz, and the Porfiriato: Authoritarian Development

To be sure, the origins of authoritarian development in nineteenth-century Mexico did not lie with Porfirio Díaz, as is often asserted. They actually went back several decades earlier, to the last presidency of Santa Anna, generally known as the Dictatorship (1853-54). But Santa Anna was overthrown too quickly—and now for the last time—for much to have actually occurred. A ministry for development (Fomento) had been created, but the liberal Revolution of Ayutla swept Santa Anna and his clique away for good. Serious reform seems to have begun around 1870, when the Finance Minister was Matías Romero. Romero was intent on providing Mexico with a modern Treasury and on ending the hand-to-mouth financing that had mostly characterized the country’s government since Independence, or at least since the mid-1830s. So it is appropriate to pick up the story here. Where did Mexico stand in 1870?[33]

The most revealing data that we have on the state of economic development come from various anthropometric and cost of living studies by Amilcar Challu, Aurora Gómez Galvarriato, and Moramay López Alonso.[34] Their research overlaps in part, and gives a fascinating picture of Mexico in the long run, from 1735 to 1940. For the moment, let us look at the period leading up to 1867, when the French withdrew from Mexico. If we look at the heights of the “literate” population, Challu’s research suggests that the standard of living stagnated between 1750 and 1840. If we look at the “illiterate” population, there was a consistent decline until 1850. Since the share of the illiterate population was clearly larger, we might infer that living standards for most Mexicans declined after 1750, however we interpret other quantitative and anecdotal evidence.

López Alonso confines her work to the period after the 1840s. From 1850 through 1890, her work generally corroborates Challu’s. The period after the Mexican War was clearly a difficult one for most Mexicans, and the challenge that both Juárez and Díaz faced was a macroeconomy in frank contraction after 1850. The regimes after 1867 were faced with stagnation.

The real wage study by Amilcar Challu and Aurora Gómez Galvarriato, when combined with the existing anthropometric work, offers a fairly clear correlation between movements in real wages (down) and heights (falling).[35]

It would then appear that growth from the 1850s through the 1870s was slow—if there was any at all—and perhaps inferior to what had come between the 1820s and the 1840s. Given the growth of import substitution during the Napoleonic Wars, roughly 1790-1810, coupled with the commercial opening brought by the Bourbons’ post-1789 extension of “free trade” to Mexico, we might well see a pattern of mixed performance (1790-1810), sharp contraction (the 1810s), rebound and recovery with sharp financial shocks in the mid-1820s and mid-1830s (1820s-1840s), and stagnation once more (1850s-1870s). Real per capita output oscillated, sometimes sharply, around an underlying growth rate of perhaps one percent; changes in the distribution of income and wealth are more or less impossible to identify consistently, because studies conflict.

Far less speculative is that the foundations for modern economic growth were laid down in Mexico during the era of Benito Juárez. Its key elements were the creation of a secular, bourgeois state and secular institutions embedded in the Constitution of 1857. The titanic ideological struggles between liberals and conservatives were ultimately resolved in favor of a liberal but nevertheless centralizing form of government under Porfirio Díaz. This was the beginning of the end of the Ancien Régime. Under Juárez, corporate lands of the Church and native villages were privatized in favor of individual holdings, and their former owners were compensated in bonds. This was effectively the largest transfer of land title since the late sixteenth century (not including the war with the United States), and it cemented the idea of individual property rights. With the expulsion of the French and the outright repudiation of the French debt, the Treasury was reorganized along more modern lines. The country got additional breathing room from the suspension of debt service to Great Britain until the terms of the 1825 loans were renegotiated under the Dublán Convention (1884). Equally if not more important, Mexico at last entered the railroad age in 1876, nearly forty years after the first tracks were laid in Cuba in 1837. The educational system was expanded in an attempt to create at least a core of literate citizens who could adopt the tools of modern finance and technology. Literacy still remained in the neighborhood of 20 percent, and life expectancy at birth scarcely reached 40 years, if that. Yet by the end of the Restored Republic (1876), Mexico had turned a corner. There would be regressions, but the nineteenth century had finally arrived, aptly if brutally signified by Juárez’s execution of Maximilian in Querétaro in 1867.[36]

Porfirian Mexico

Yet when Díaz came to power, Mexico was in many ways much as it had been a century earlier. It was a rural, agrarian nation whose primary agricultural product was maize, followed by wheat and beans. These were produced on haciendas and ranchos in Jalisco, Guanajuato, Michoacán, Mexico, and Puebla, as well as in Oaxaca, Veracruz, Aguascalientes, Chihuahua, and Sonora. Cotton, which with great difficulty had begun to supply a mechanized factory regime (first in spinning, then weaving), was produced in Oaxaca, Yucatán, Guerrero, and Chiapas, as well as in parts of Durango and Coahuila. Domestic production of raw cotton rarely sufficed to supply the factories in Michoacán, Querétaro, Puebla, and Veracruz, so imports from the Southern United States were common. For the most part, the indigenous population lived on maize, beans, and chile, producing its own subsistence on small, scattered plots known as milpas. Perhaps 75 percent of the population was rural, with the remainder to be found in cities like Mexico, Guadalajara, San Luis Potosí, and, later, Monterrey. Population growth in the southern and eastern parts of the country had been relatively slow in the nineteenth century; the North and center-North grew more rapidly, the center of the country less so. Immigration from abroad had been of no consequence.[37]

It is a commonplace to see the presidency of Porfirio Díaz (1876-1910) as a critical juncture in Mexican history, and this would be no less true of economic or commercial history as well. By 1910, when the Díaz government fell and Mexico descended into two decades of revolution, the first one extremely violent, the face of the country had been changed for good. The nature and effect of these changes remain not only controversial, but essential for understanding the subsequent evolution of the country, so we should pause here to consider some of their essential features.

While mining, and especially silver mining, had long held a privileged place in the economy, the nineteenth century witnessed a number of significant changes. Until about 1889, the coinage of gold, silver, and copper—a very rough proxy for production, given how much silver had been illegally exported—continued on a steady upward track. In 1822, coinage was about 10 million pesos. By 1846, it had reached roughly 15 million pesos. There was something of a structural break after the war with the United States (its origins are unclear), and coinage continued upward to about 25 million pesos in 1888. Then the falling international price of silver, brought on by large increases in supply elsewhere, drove the trend sharply downward after 1889. By 1909-10, coinage had collapsed to levels not recorded since the 1820s, although in 1904 and 1905 it had spiked to nearly 45 million pesos.[38]

It comes as no surprise that these variations in production corresponded to sharp changes in international relative prices. For example, the market price of silver declined sharply relative to lead, and Mexican producers responded with a large increase in lead production and a diversification into other metals, including zinc, antimony, and copper. Mexico left the silver standard for international transactions in 1905 (while continuing to use silver domestically), which contributed to the eclipse of this once-crucial industry; silver would never again have the status it had when Díaz became president in 1876, when precious metals represented 75 percent of Mexican exports by value. By the time he had decamped in exile to Paris, precious metals accounted for less than half of all exports.

The reason for this relative decline was the diversification of agricultural exports that had been slowly occurring since the 1870s. Coffee, cotton, sugar, sisal and vanilla were the principal crops, and some regions of the country such as Yucatán (henequen) and Durango and Tamaulipas (cotton) supplied new export crops.

 

Railroads and Infrastructure

None of this would have occurred without the massive changes in land tenure that had begun in the 1850s, but most of all, without the construction of railroads financed by the migration of foreign capital to Mexico under Díaz. At one level, it is a well-known story of social savings, which were substantial in Mexico because the terrain was difficult and the alternative modes of carriage few. One way or another, transportation has always been viewed as an “obstacle” to Mexican economic development. That must be true at some level, although recent studies (especially by Sandra Kuntz) have raised important qualifications. Railroads may not have been gateways to foreign dependency, as historians once argued, but there were limits to their ability to effect economic change, even internally. They tended to enlarge the internal market for some commodities more than others. The peculiarities of rate-making produced other distortions, while markets for some commodities were inevitably concentrated in major cities or transshipment points which afforded some monopoly power to distributors even as a national market in basic commodities became more of a reality. Yet, in general, the changes were far reaching.[39]
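The social-savings concept invoked here can be stated compactly. In the standard cliometric formulation (due to Fogel), the social saving of railroads in a given year is the extra cost of moving the freight actually carried by rail via the cheapest alternative mode:

```latex
% Social saving of railroads in year t:
%   P^{alt}_t  = unit freight cost by the best alternative (wagon, mule, canal),
%   P^{rail}_t = unit freight cost by rail,
%   Q^{rail}_t = freight actually carried by rail.
SS_t = \left( P^{alt}_t - P^{rail}_t \right) Q^{rail}_t
```

Because difficult terrain and the scarcity of navigable waterways made the alternative cost high in Mexico, the measured saving was large there. Note that the formula is, strictly speaking, an upper bound: at the higher alternative price, less freight would actually have been shipped.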

Conventional figures confirm conventional wisdom. When Díaz assumed the presidency, there were 660 km (410 miles) of track. In 1910, there were 19,280 km (about 12,000 miles). Seven major lines linked the cities of Mexico, Veracruz, Acapulco, Juárez, Laredo, Puebla, Oaxaca, Monterrey, and Tampico by 1892. The lines were built by foreign capital (e.g., the Central Mexicano was built by the Atchison, Topeka and Santa Fe), which is why resolving the long-standing questions of foreign debt service was critical. Large government subsidies on the order of 3,500 to 8,000 pesos per km were granted, and financing the subsidies amounted to over 30 million pesos by 1890. While the railroads were successful in creating more of a national market, especially in the North, their finances were badly affected by the depreciation of the silver peso, given that foreign liabilities had to be liquidated in gold.

As a result, the government nationalized the railroads in 1903. At the same time, it undertook an enormous effort to construct infrastructure such as drainage and ports, virtually all of which was financed by British capital and managed by “Don Porfirio’s contractor,” Sir Weetman Pearson. Between railroads, ports, drainage works, and irrigation facilities, the Mexican government borrowed 157 million pesos to finance costs.[40]

The expansion of the railroads, the build-out of infrastructure, and the expansion of trade would normally have increased output per capita. Any data we have prior to 1930 are problematic, and before 1895, strictly speaking, we have no official measures of output per capita at all. Most scholars shy away from using levels of GDP in any form, other than for illustrative purposes. Aside from the usual problems attending national income accounting, Mexico presents a few exceptional challenges. In peasant families, where women were entrusted with converting maize into tortillas, no small job, the omission of their value added from GDP must constitute a sizeable defect in measured output. Moreover, as the commercial radius of Mexican agriculture expanded rapidly with the spread of railroads, roads, and later, highways, measured growth rates partly reflected increased commercialization rather than increased production. We have no idea how important this phenomenon was, but it is worth keeping in mind when we look at very rapid growth rates after 1940.

There are various measures of cumulative growth during the Porfiriato. By and large, the figure from 1900 through 1910 is around 23 percent, which is certainly higher than rates achieved during the nineteenth century, but nothing like what was recorded after 1940. In light of declining real wages, one can only assume that the bulk of “progress” flowed to the recipients of property income. This may well have represented a reversal of trends in the nineteenth century, when, some argue, property income contracted in the wake of the Insurgency.[41]
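To put that cumulative figure on a comparable footing with the annual rates cited elsewhere in this essay, it can be converted to a compound annual rate:

```latex
% Cumulative growth of roughly 23 percent over the decade 1900-1910
% implies a compound annual rate g of about 2.1 percent:
(1+g)^{10} = 1.23
\quad\Longrightarrow\quad
g = 1.23^{1/10} - 1 \approx 0.021
```

That is, about 2.1 percent per year, respectable by nineteenth-century standards but well below the rates of roughly 6 percent per year later recorded during the “Miracle.”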

There was also significant industrialization in Mexico during the Porfiriato. Some industry, especially textiles, had its origins in the 1840s, but its size, scale, and location altered dramatically by the end of the nineteenth century. For example, the cotton textile industry saw the number of workers, spindles, and looms more than double from the late 1870s to the first decade of the twentieth century. Brewing and its associated industry, glassmaking, became well established in Monterrey during the 1890s. The country’s first iron and steel mill, Fundidora Monterrey, was established there as well in 1903. Other industries, such as papermaking and cigarettes, followed suit. By the end of the Porfiriato, industry certainly accounted for over 10 percent of Mexico’s output.[42]

 

From Revolution to “Miracle”

The Mexican Revolution (1910-1940) began as a political upheaval provoked by a crisis in the presidential succession when Porfirio Díaz refused to leave office in the wake of electoral defeat after signaling his willingness to do so in a famous public interview of 1908.[43] It was also the result of an agrarian uprising and the insistent demand of Mexico’s growing industrial proletariat for a share of political power. Finally, there was a small (fewer than 10 percent of all households) but upwardly mobile urban middle class created by economic development under Díaz whose access to political power had been effectively blocked by the regime’s mechanics of political control. Precisely how “revolutionary” the results of the armed revolt (which persisted largely through the 1910s and peaked in a civil war in 1914-1915) were has long been contentious, but the question is only tangentially relevant as a matter of economic history. The Mexican Revolution was no Bolshevik movement (of course, it predated Bolshevism by seven years), but it was not a purely bourgeois constitutional movement either, although it contained substantial elements of both.

From a macroeconomic standpoint, it has become fashionable to argue that the Revolution had few, if any, profound economic consequences. It seems as if the principal reason was that revolutionary factions were interested in appropriating rather than destroying the means of production. For example, the production of crude oil peaked in Mexico in 1915—at the height of the Revolution—because crude oil could be used as a source of income to the group controlling the wells in Veracruz state. This was a powerful consideration.[44]

Yet in another sense, the conclusion that the Revolution had slight economic effects is not only facile, but obviously wrong. As the demographic historian Robert McCaa showed, the excess mortality occasioned by the Revolution was larger than that of any similar event in Mexican history other than the conquest in the sixteenth century. No attempt has been made to measure the output lost through this demographic wastage (including births that never occurred), yet even the effect on the population cohort born between 1910 and 1920 is plain to see in later demographic studies.[45]

There is also a subtler question that some scholars have raised. The Revolution increased labor mobility and the labor supply by abolishing constraints on the rural population such as debt peonage and even outright slavery. Moreover, the Revolution, by encouraging and ultimately setting into motion a massive redistribution of previously privatized land, contributed to an enlarged supply of that factor of production as well. The true impact of these developments was realized in the 1940s and 1950s, when rapid economic growth began: the so-called Mexican Miracle, characterized by rates of real growth of as much as 6 percent per year (1955-1966). Whatever the connection between the Revolution and the Miracle, establishing it will require serious examination on empirical grounds, and not simply a dogmatic dismissal of what is now regarded as unfashionable development thinking: import substitution and inward-oriented growth.[46]

The other major consequences of the Revolution, the agrarian reform and the creation of the ejido (land granted by the Mexican state to the rural population under the authority provided by the revolutionary Constitution of 1917), took considerable time to coalesce, and were arguably not even high on the list of priorities of one of the Revolution’s principal instigators, Francisco Madero. The redistribution of land to the peasantry in the form of possession if not ownership (a kind of return to real or fictitious preconquest and colonial forms of land tenure) peaked during the avowedly reformist, and even modestly radical, presidency of Lázaro Cárdenas (1934-1940) after making only halting progress under his predecessors since the 1920s. From 1940 to 1965, the cultivated area in Mexico grew at 3.7 percent per year, and productivity in basic food crops rose at 2.8 percent per year.

Nevertheless, the long-run effects of the agrarian reform and land redistribution have been predictably controversial. Under the presidency of Carlos Salinas (1988-1994) the reform was officially declared over, with no further land redistribution to be undertaken and the legal status of the ejido definitively changed. The principal criticism of the ejido was that, in the long run, it encouraged inefficiently small landholding per farmer and, by virtue of its limitations on property rights, made agricultural credit difficult for peasants to obtain.[47]

There is no doubt these are justifiable criticisms, but they have to be placed in context. Cárdenas’ predecessors in office, Alvaro Obregón (1920-1924) and Plutarco Elías Calles (1924-1928), may well have preferred a more commercial model of agriculture with larger, irrigated holdings. But it is worth recalling that one of the original agrarian leaders of the Revolution, Emiliano Zapata, had from the start an uneasy relationship with Madero, who saw the Revolution in mostly political terms, and quickly rejected Madero’s leadership in favor of restoring peasant lands in his native state of Morelos. Cárdenas, who was in the midst of several major maneuvers that would require widespread popular support (such as the expropriation of foreign oil companies operating in Mexico in March 1938), was undoubtedly sensitive to the need to mobilize the peasantry on his behalf. The agrarian reform of his presidency, which surpassed that of any other, needs to be considered in those terms as well as in terms of economic efficiency.[48]

Cárdenas’ presidency also coincided with the continuation of the Great Depression. Like other countries in Latin America, Mexico was hard hit, at least through the early 1930s. All sorts of consumer goods became scarcer, and the depreciation of the peso raised the relative price of imports. As had happened previously in Mexican history (in 1790-1810, during the Napoleonic Wars and the disruption of the Atlantic trade), in the medium term domestic industry was nevertheless given a stimulus, and import substitution, the subsequent core of Mexico’s industrialization program after World War II, received a decisive boost. On the other hand, Mexico also experienced the forced “repatriation” of people of Mexican descent, mostly from California, 60 percent of whom were United States citizens. The effects of this movement (the emigration of the Revolution in reverse) have never been properly analyzed. The general consensus is that World War II helped Mexico to prosper. Demand for labor and materials from the United States, with which Mexico was allied, raised real wages and incomes, and thus boosted aggregate demand. From 1939 through 1946, real output in Mexico grew by approximately 50 percent. Population growth accelerated as well as the country began to move into the later stages of the demographic transition, with a falling death rate while birth rates remained high.[49]

 

From Miracle to Meltdown: 1950-1982  

The history of import substitution manufacturing did not begin with postwar Mexico, but few countries (especially in Latin America) became as identified with the policy in the 1950s, and with what Mexicans termed the emergence of “stabilizing development.” There was never anything resembling a formal policy announcement, although Raúl Prebisch’s 1949 manifesto, “The Economic Development of Latin America and its Principal Problems,” might be regarded as supplying one. Prebisch’s argument, that a directed change in the composition of imports toward capital goods would facilitate domestic industrialization, was, in essence, the basis of the policy that Mexico followed. Mexico stabilized the nominal exchange rate at 12.5 pesos to the dollar in 1954, and further movements in the real exchange rate (until the 1970s) were unimportant. The substantive bias of import substitution in Mexico was a high effective rate of protection for both capital and consumer goods. Jaime Ros has calculated that these rates ranged between 47 and 85 percent in 1960, and between 33 and 109 percent in 1980. The result, in the short to intermediate run, was very rapid economic growth, averaging 6.5 percent per year from 1950 through 1973. Other than Brazil, which also followed an import substitution regime, no country in Latin America experienced higher rates of growth. Mexico’s was substantially above the regional average.[50]
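The effective rate of protection cited here measures protection of domestic value added rather than of the final good. A standard textbook formulation (the illustrative numbers below are hypothetical, not Ros’s estimates) is:

```latex
% Effective rate of protection (ERP):
%   t_f = nominal tariff on the final good,
%   t_i = nominal tariff on imported inputs,
%   a   = input share in the value of the final good at free-trade prices.
ERP = \frac{t_f - a\, t_i}{1 - a}
% Hypothetical example: t_f = 0.40, t_i = 0.10, a = 0.5 gives
% ERP = (0.40 - 0.05)/0.5 = 0.70, i.e. 70 percent.
```

The point of the formula is that when inputs enter at low tariffs, protection of value added can far exceed the nominal tariff on the final good, which is how Ros’s estimates can run as high as 109 percent.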

[Historical graph of population growth in Mexico through 2000]

Source: Estadísticas Históricas de México (various editions since 1999; the most recent is 2014), http://dgcnesyp.inegi.org.mx/ehm/ehm.htm (Accessed July 20, 2016)

 

But there were unexpected results as well. The contribution of labor to GDP growth was 14 percent; capital’s contribution was 53 percent; and the remainder, total factor productivity (TFP), accounted for 28 percent.[51] As a consequence, while Mexico’s growth occurred through the accumulation of capital, the distribution of income became extremely skewed. The ratio of the top 10 percent of household income to the bottom 40 percent was 7 in 1960, and 6 in 1968. Even supporters of Mexico’s development program, such as Carlos Tello, conceded that it was probably only organized peasants and workers who experienced an effective improvement of their relative position. The fruits of the Revolution were unevenly distributed, even among the working class.[52]
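The decomposition behind these contribution figures is standard growth accounting: output growth is split into share-weighted factor growth plus a residual (TFP), and each term divided by output growth gives its percentage contribution:

```latex
% Growth accounting:
%   g_Y = growth of output, g_K, g_L = growth of capital and labor,
%   \alpha = capital's share of income (1 - \alpha is labor's share),
%   g_A = TFP growth (the residual).
g_Y = \alpha\, g_K + (1-\alpha)\, g_L + g_A
% The contributions cited in the text are the shares of each term in g_Y:
%   capital: \alpha g_K / g_Y \approx 0.53,
%   labor:   (1-\alpha) g_L / g_Y \approx 0.14,
%   TFP:     g_A / g_Y \approx 0.28.
```

The large weight on capital accumulation, relative to labor, is what links this decomposition to the skewed income distribution discussed above.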

By “organized” one means such groups as the most important labor union in the country, the CTM (Confederation of Mexican Workers), and the nationally recognized peasant union, the CNC, which formed two of the three organized sectors of the official government party, the PRI, or Party of the Institutional Revolution, organized in 1946. The CTM in particular was instrumental in supporting the official policy of import substitution, and thus benefited from government wage setting and political support. The leaders of these organizations became important political figures in their own right. One, Fidel Velázquez, served as both a federal senator and head of the CTM from 1941 until his death in 1997. The incorporation of these labor and peasant groups into the political system offered the government both a means of control and a guarantee of electoral support. They became pillars of what the Peruvian writer Mario Vargas Llosa famously called “the perfect dictatorship” of the PRI from 1946 to 2000, during which the PRI held a monopoly of the presidency and the important offices of state. In a sense, import substitution was the economic ideology of the PRI.[53]

The relationship between labor and economic development during the years of rapid growth is, like many other subjects, debated. While some have found strong wage growth, others, looking mostly at Mexico City, have found declining real wages. Beyond that, there is the question of informality and a segmented labor market. Were workers in the CTM the real beneficiaries of economic growth, while others in the informal sector (defined as receiving no social security payments, meaning roughly two-thirds of Mexican workers) did far less well? The attraction of a segmented labor market model is that it can address one obvious puzzle: why would industry substitute capital for labor, as it clearly did, if real wages were not rising? One attractive hypothesis, though not one that commands universal agreement, is that an informal sector absorbed the rapid influx of rural migrants and thus held nominal wages steady, while organized labor in the CTM won higher negotiated wages but, in so doing, limited its own employment. Nothing has been resolved, at least for the period of the “Miracle.” After Mexico entered a prolonged series of economic crises in the 1980s (here labelled “meltdown”), the discussion must change, because many hold that the key to relative political stability and the failure of open unemployment to rise sharply can be explained by falling real wages.

The fiscal basis on which the years of the Miracle were constructed was conventional, not to say conservative.[54] A stable nominal exchange rate, balanced budgets, limited public borrowing, and a predictable monetary policy were all predicated on the notion that the private sector would react positively to favorable incentives. By and large, it did. Until the late 1960s, foreign borrowing was considered inconsequential, even if there was some concern on the horizon that it was starting to rise. No one foresaw serious macroeconomic instability. It is worth consulting a brief memorandum from Secretary of State Dean Rusk to President Lyndon Johnson (Washington, December 11, 1968) to get some insight into how informed contemporaries viewed Mexico. The instability that existed was seen as a consequence of heavy-handedness on the part of the PRI and overreaction in the security forces. Informed observers did not view Mexico’s embrace of import-substitution industrialization as a train wreck waiting to happen. Historical actors are rarely so prescient.[55]

 

Slowing of the Miracle and Echeverría

The most obvious problems in Mexico were political. They stemmed from the increasing awareness that the limits of the “institutional revolution” had been reached, particularly regarding the growing democratic demands of the urban middle classes. The economic problem, which was far from obvious, was that import substitution had concentrated income in the upper 10 percent of the population, so that domestic demand had begun to stagnate. Initially at least, public sector borrowing could support a variety of consumption subsidies to the population, and there were also efforts to transfer resources out of agriculture via domestic prices for staples such as maize. Yet Mexico’s population was also growing at the rate of nearly 3 percent per year, so that the long-term prospects for any of these measures were cloudy.

At the same time, growing political pressures on the PRI, most dramatically manifest in the army’s violent repression of student demonstrators at Tlatelolco in 1968, just prior to the Olympics, had convinced some elements in the PRI, people like Carlos Madrazo, to argue for more radical change. The emergence of an incipient guerilla movement in the state of Guerrero had much the same effect. The new president, Luis Echeverría (1970-76), openly pushed for changes in the distribution of income and wealth, incited agrarian discontent for political purposes, dramatically increased government spending and borrowing, and alienated what had typically been a complaisant, if not especially friendly, private sector.

The country’s macroeconomic performance began to deteriorate dramatically. Inflation, normally in the range of about 5 percent, rose into the low 20 percent range in the early 1970s. The public sector deficit, fueled by increasing social spending, rose from 2 to 7 percent of GDP. Money supply growth now averaged about 14 percent per year. Real GDP growth had begun to slip after 1968, and in the early 1970s it deteriorated further, if unevenly. There had been clear convergence of regional economies in Mexico between 1930 and 1980 because of changing patterns of industrialization in the northern and central regions of the country. After 1980, that process stalled and regional inequality again widened.[56]

While there is a tendency to blame Luis Echeverría for all or most of these developments, this forgets that his administration coincided with the first OPEC oil shock (1973) and rapidly deteriorating external conditions. Mexico had not yet discovered the oil reserves (1978) that were to provide a temporary respite from economic adjustment after the shock of the peso devaluation of 1976, the first change in its value in over 20 years. At the same time, external demand fell, principally transmitted from the United States, Mexico’s largest trading partner, where the economy had fallen into recession in late 1973. Yet it seems reasonable to conclude that the difficult international environment, while important in bringing Mexico’s “miracle” period to a close, was not helped by Echeverría’s propensity for demagoguery, or by the loss of the fiscal discipline that had long characterized government policy, at least since the 1950s. The only question to be resolved was what sort of conclusion the period would come to. The answer, unfortunately, was disastrous.[57]

 

Meltdown: The Debt Crisis, the Lost Decade and After

In contemporary parlance, Mexico had passed from “stabilizing” to “shared” development under Echeverría. But the devaluation of 1976, from 12.5 to 20.5 pesos to the dollar, suggested that something had gone awry. One might suppose that some adjustment in course, especially in public spending and borrowing, would have followed. Precisely the opposite occurred. Between 1976 and 1979, nominal federal spending doubled. The budget deficit increased by a factor of 15. The reason for this odd performance was the discovery of crude oil in the Gulf of Mexico, perhaps unsurprising in light of the spiking prices of the 1970s (the oil shocks of 1973-74 and 1978-79), but nevertheless of considerable magnitude. In 1975, Mexico’s proven reserves were 6 billion barrels of oil. By 1978, they had increased to 40 billion. President López Portillo set himself to the task of “administering abundance,” and Mexican analysts confidently predicted crude oil at $100 a barrel (when it stood at $37 in current prices in 1980). The scope of the miscalculation was catastrophic. At the same time, encouraged by bank loan pushing and effectively negative real rates of interest, Mexico borrowed abroad. Consumption subsidies, while vital in the face of slowing import substitution, were also costly, and when supported by foreign borrowing, unsustainable. Foreign indebtedness doubled between 1976 and 1979, and rose even further thereafter.

Matters came to a head in 1982. By then, Mexico’s foreign indebtedness was estimated at over $80 billion, an increase from less than $20 billion in 1975. Real interest rates had begun to rise in the United States in mid-1981, and with Mexican borrowing tied to international rates, debt service rapidly increased. Oil revenue, which had come to constitute the great bulk of foreign exchange, followed international crude prices downward, driven in large part by a recession that had begun in the United States in mid-1981. Within six months, Mexico, too, had fallen into recession. Real per capita output was to decline by 8 percent in 1982. Forced to devalue sharply, Mexico saw the real exchange rate fall by 50 percent in 1982, while inflation approached 100 percent. By the late summer, Finance Minister Jesús Silva Herzog admitted that the country could not meet an upcoming payment obligation, and was forced to turn to the US Federal Reserve, to the IMF, and to a committee of bank creditors for assistance. In late August, in a remarkable display of intemperance, President López Portillo nationalized the banking system. By December 20, 1982, Mexico’s incoming president, Miguel de la Madrid (1982-88), appeared, beleaguered, on the cover of Time Magazine framed by the caption, “We are in an Emergency.” It was, as the saying goes, a perfect storm, and with it, the Debt Crisis and the “Lost Decade” in Mexico had begun. It would be years before anything resembling stability, let alone prosperity, was restored. Even then, what growth there was paled beside what had occurred during the decades of the “Miracle.”

 

The 1980s

The 1980s were a difficult decade.[58] After 1981, annual real per capita growth would not reach 4 percent again until 1989, and in 1986, real per capita output fell by 6 percent. In 1987, inflation reached 159 percent. The nominal exchange rate depreciated by 139 percent in 1986-1987. By the standards of the years of stabilizing development, the record of the 1980s was disastrous. To complete the devastation, on September 19, 1985, the worst earthquake in Mexican history, 7.8 on the Richter scale, devastated large parts of central Mexico City and killed 5,000 people (some estimates run as high as 25,000), many of whom were simply buried in mass graves. It was as if a plague of biblical proportions had struck the country.

Massive indebtedness produced a dramatic decline in the standard of living as structural adjustment occurred. Servicing the debt required the production of an export surplus in non-oil exports, which, in turn, required a reduction in domestic consumption. In an effort to surmount the crisis, the government implemented an agreement between organized labor, the private sector, and agricultural producers called the Economic Solidarity Pact (PSE). The PSE combined an incomes policy with fiscal austerity, trade and financial liberalization, generally tight monetary policy, and debt renegotiation and reduction. The centerpiece of the “remaking” of the previously inward orientation of the domestic economy was the North American Free Trade Agreement (NAFTA, effective 1994) linking Mexico, the United States, and Canada. While average tariff rates in Mexico had fallen from 34 percent in 1985 to 4 percent in 1992, even before NAFTA was signed, the agreement was generally seen as creating the institutional and legal framework whereby the reforms of Miguel de la Madrid and Carlos Salinas (1988-1994) would be preserved. Most economists thought its effects would be relatively larger in Mexico than in the United States, which generally appears to have been the case. Nevertheless, NAFTA has been predictably controversial, as trade agreements are wont to be. The political furor (and, in some places, euphoria) surrounding the agreement has faded, but never entirely disappeared. In the United States in particular, NAFTA is blamed for deindustrialization, although pressure on manufacturing, like trade liberalization itself, was underway long before NAFTA was negotiated. In Mexico, there has been much hand-wringing over the fate of agriculture, and of small maize producers in particular. While none of this is likely to cease, it is nevertheless the case that there has been a large increase in the volume of trade between the NAFTA partners.
To dismiss this is, quite plainly, misguided, even where sensitive and well-organized political constituencies are concerned. But the legacy of NAFTA, like most everything in Mexican economic history, remains unsettled. As a result, the agreement was subject to a controversial renegotiation in 2018, largely fueled by protectionist sentiment in the Trump administration. While the intent was to increase costs in the Mexican automobile industry so as to price labor in the United States back into the industry, the long-term effect of the measure (not to say its ratification) remains to be seen.

 

Post Crisis: No Miracles

Still, while some prosperity was restored to Mexico by the reforms of the 1980s and 1990s, the general macroeconomic results have been disappointing, not to say mediocre. According to the Instituto Nacional de Estadística, Geografía e Informática, average real compensation per person in manufacturing in 2008 was virtually unchanged from 1993, and there is little reason to think compensation has improved at all since then. It is generally conceded that per capita GDP growth has probably averaged not much more than 1 percent a year. Real GDP growth since NAFTA, according to the OECD, has rarely reached 5 percent, and since 2010 it has been well below that.

 

 

[Graph of GDP growth in Mexico, World Bank data; the vertical scale cuts the horizontal axis at 1982]

Source: http://www.worldbank.org/en/country/mexico (Accessed July 21, 2016)

 

For virtually everyone in Mexico, the question is why, and the answers proposed include virtually any plausible factor: the breakdown of the political system after the PRI’s historic loss of presidential power in 2000; the rise of China as a competitor to Mexico in international markets; the explosive spread of narcoviolence in recent years, albeit concentrated in the states of Sonora, Sinaloa, Tamaulipas, Nuevo León and Veracruz; the results of NAFTA itself; the failure of the political system to undertake further structural economic reforms and privatizations after the initial changes of the 1980s, especially regarding the national oil monopoly, Petroleos Mexicanos (PEMEX); and the failure of the border industrialization program (maquiladoras) to develop substantive backward linkages to the rest of the economy. This is by no means an exhaustive list of candidate explanations for poor economic performance. The choice of a cause tends to reflect the ideology of the critic.[59]

Yet it seems that, at the end of the day, the reason why post-NAFTA Mexico has failed to grow comes down to something much more fundamental: a fear of growing, embedded in the belief that the collapse of the 1980s and early 1990s (including the devastating “Tequila Crisis” of 1994-1995, which resulted in another enormous devaluation of the peso after an initial attempt to contain the crisis was bungled) was so traumatic and costly as to render even modest efforts to promote growth, let alone the dirigisme of times past, essentially unwarranted. The central bank, the Banco de México (Banxico), rules out the promotion of economic growth as part of its remit (even as a theoretical proposition, let alone as a goal of macroeconomic policy) and concerns itself only with price stability. The language of its formulation is striking. “During the 1970s, there was a debate as to whether it was possible to stimulate economic growth via monetary policy. As a result, some governments and central banks tried to reduce unemployment through expansive monetary policy. Both economic theory and the experience of economies that tried this prescription demonstrated that it lacked validity. Thus, it became clear that monetary policy could not actively and directly stimulate economic activity and employment. For that reason, modern central banks have as their primary goal the promotion of price stability” (translation mine). Banxico is not the Fed: there is no dual mandate in Mexico.[60] This may well change during the new presidential administration of Andrés Manuel López Obrador (known colloquially in Mexico as AMLO).

The Mexican banking system has scarcely made things easier. Private credit stands at only about a third of GDP. In recent years, the increase in private sector savings has been largely channeled to government bonds, but until quite recently, public sector deficits were very small, which is to say, fiscal policy has not been expansionary. If monetary and fiscal policy are both relatively tight, if private credit is not easy to come by, and if growth is typically presumed to be an inevitable concomitant of economic stability for which no actor (other than the private sector) is deemed responsible, it should come as no surprise that economic growth over the past two decades has been lackluster. In the long run, aggregate supply determines real GDP, but in the short run, nominal demand matters: there is no point in creating productive capacity to satisfy demand that does not exist. And, unlike during the period of the Miracle and Stabilizing Development, attention to demand since 1982 has been limited, not to say off the table completely. It may be understandable, but Mexico's fiscal and monetary authorities seem to suffer from what could be termed "Fear of Growth." For better or worse, the results are now on display. After its current (2016) return to a relatively austere budget, it remains to be seen how the economic and political system in contemporary Mexico handles slow economic growth.

The response of the Mexican public to a generation of stagnation in living standards, as well as to rising insecurity and the perception of widespread public corruption, was the victory of AMLO in the presidential election of July 2018.

AMLO had previously run for President with a different party. After two unsuccessful attempts, he started a new one, called MORENA. He then proceeded to win 53 percent of the vote, virtually obliterating the opposition parties: the incumbent PRI and the PAN. MORENA also won majorities in both houses of Congress. To most observers, this signified that AMLO would be a potentially strong president, assuming his congressional party remained loyal to him. His somewhat checkered "leftist" past guaranteed that not everyone was thrilled at the prospect of a strong AMLO presidency.

Expectations for AMLO's presidency are thus high, perhaps unrealistically so. While his initial budget has been generally well received by the financial markets, there is little question as to where AMLO's priorities lie. He has advocated increases in spending on infrastructure, has moved to restore the real minimum wage to its 1994 level, and has pledged to revitalize domestic agriculture. Whether these and a number of other reforms that AMLO has somewhat paradoxically labelled "Republican Austerity" will restore the country to its pre-1982 growth path now constitutes one of the most watched economic experiments in Latin America.[61]

[1] I am grateful to Ivan Escamilla and Robert Whaples for their careful readings and thoughtful criticisms.

[2] The standard reference work is Sandra Kuntz Ficker (ed.), Historia económica general de México. De la Colonia a nuestros días (México, DF: El Colegio de México, 2010).

[3] Oscar Martinez, Troublesome Border (rev. ed., Tucson, AZ: University of Arizona Press, 2006) is the most helpful general account in English.

[4] There are literally dozens of general accounts of the pre-conquest world. A good starting point is Richard E.W. Adams, Prehistoric Mesoamerica (3d ed., Norman, OK: University of Oklahoma Press, 2005). More advanced is Richard E.W. Adams and Murdo J. Macleod, The Cambridge History of the Mesoamerican Peoples: Mesoamerica (2 parts, New York: Cambridge University Press, 2000).

[5] Nora C. England and Roberto Zavala Maldonado, "Mesoamerican Languages," Oxford Bibliographies, http://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0080.xml (accessed July 10, 2016).

[6] For an introduction to the nearly endless controversy over the pre- and post-contact population of the Americas, see William M. Denevan (ed.), The Native Population of the Americas in 1492 (2d rev ed., Madison: University of Wisconsin Press, 1992).

[7] Sherburne F Cook and Woodrow Borah, Essays in Population History: Mexico and California (Berkeley, CA: University of California Press, 1979), p. 159.

[8] Gene C. Wilken, Good Farmers: Traditional Agricultural Resource Management in Mexico and Central America (Berkeley: University of California Press, 1987), p. 24.

[9] Bernard Ortiz de Montellano, Aztec Medicine, Health, and Nutrition (New Brunswick, NJ: Rutgers University Press, 1990).

[10] Bernardo García Martínez, “Encomenderos españoles y British residents: El sistema de dominio indirecto desde la perspectiva novohispana”, in Historia Mexicana, LX: 4 [140] (abr-jun 2011), pp. 1915-1978.

[11] These epidemics are extensively and exceedingly well documented. One of the most recent examinations is Rodolfo Acuna-Soto, David W. Stahle, Matthew D. Therrell, Richard D. Griffin, and Malcolm K. Cleaveland, "When Half of the Population Died: The Epidemic of Hemorrhagic Fevers of 1576 in Mexico," FEMS Microbiology Letters 240 (2004), pp. 1-5 (http://femsle.oxfordjournals.org/content/femsle/240/1/1.full.pdf, accessed July 10, 2016). See in particular the exceptional map and table on pp. 2-3.

[12] See in particular Bernardo García Martínez, Los pueblos de la Sierra: el poder y el espacio entre los indios del norte de Puebla hasta 1700 (México, DF: El Colegio de México, 1987) and Elinor G.K. Melville, A Plague of Sheep: Environmental Consequences of the Conquest of Mexico (New York: Cambridge University Press, 1997).

[13] J. H. Elliott, “A Europe of Composite Monarchies,” Past & Present 137 (The Cultural and Political Construction of Europe): 48–71; Guadalupe Jiménez Codinach, “De Alta Lealtad: Ignacio Allende y los sucesos de 1808-1811,” in Marta Terán and José Antonio Serrano Ortega, eds., Las guerras de independencia en la América Española (La Piedad, Michoacán, MX: El Colegio de Michoacán, 2002), p. 68.

[14] Richard Salvucci, "Capitalism and Dependency in Latin America," in Larry Neal and Jeffrey G. Williamson, eds., The Cambridge History of Capitalism (2 vols., New York: Cambridge University Press, 2014), 1: pp. 403-408.

[15] Source: TePaske Page, http://www.insidemydesk.com/hdd.html (Accessed July 19, 2016)

[16] Edith Boorstein Couturier, The Silver King: The Remarkable Life of the Count of Regla in Colonial Mexico (Albuquerque, NM: University of New Mexico Press, 2003). Dana Velasco Murillo, Urban Indians in a Silver City: Zacatecas, Mexico, 1546-1810 (Stanford, CA: Stanford University Press, 2015), p. 43. The standard work on the subject is David Brading, Miners and Merchants in Bourbon Mexico, 1763-1810 (New York: Cambridge University Press, 1971). But also see Robert Haskett, "Our Suffering with the Taxco Tribute: Involuntary Mine Labor and Indigenous Society in Central New Spain," Hispanic American Historical Review, 71:3 (1991), pp. 447-475. For silver in China see http://afe.easia.columbia.edu/chinawh/web/s5/s5_4.html (accessed July 13, 2016). For the rents of empire question, see Michael Costeloe, Response to Revolution: Imperial Spain and the Spanish American Revolutions, 1810-1840 (New York: Cambridge University Press, 1986).

[17] This is an estimate. David Ringrose concluded that in the 1780s, the colonies accounted for 45 percent of Crown income, and one would suppose that Mexico would account for at least about half of that. See David R. Ringrose, Spain, Europe and the ‘Spanish Miracle’, 1700-1900 (New York: Cambridge University Press, 1996), p. 93; Mauricio Drelichman, “The Curse of Moctezuma: American Silver and the Dutch Disease,” Explorations in Economic History 42:3 (2005), pp. 349-380.

[18] José Antonio Escudero, El supuesto memorial del Conde de Aranda sobre la Independencia de América (México, DF: Universidad Nacional Autónoma de México, 2014) (http://bibliohistorico.juridicas.unam.mx/libros/libro.htm?l=3637, accessed July 13, 2016).

[19] Allan J. Kuethe and Kenneth J. Andrien, The Spanish Atlantic World in the Eighteenth Century. War and the Bourbon Reforms, 1713-1796 (New York: Cambridge University Press, 2014) is the most recent account of this period.

[20] Richard J. Salvucci, “Economic Growth and Change in Bourbon Mexico: A Review Essay,” The Americas, 51:2 (1994), pp. 219-231; William B Taylor, Magistrates of the Sacred: Priests and Parishioners in Eighteenth Century Mexico (Palo Alto: Stanford University Press, 1996), p. 24; Luis Jáuregui, La Real Hacienda de Nueva España. Su Administración en la Época de los Intendentes, 1786-1821 (México, DF: UNAM, 1999), p. 157.

[21] Jeremy Baskes, Staying Afloat: Risk and Uncertainty in Spanish Atlantic World Trade, 1760-1820 (Stanford, CA: Stanford University Press, 2013); Xabier Lamikiz, Trade and Trust in the Eighteenth-century Atlantic World: Spanish Merchants and their Overseas Networks (Suffolk, UK: The Boydell Press, 2013). The starting point of all these studies is Clarence Haring, Trade and Navigation between Spain and the Indies in the Time of the Hapsburgs (Cambridge, MA: Harvard University Press, 1918).

[22] The best, and indeed virtually unique, starting point for considering these changes in their broadest dimensions is the joint work of Stanley and Barbara Stein: Silver, Trade, and War (2003); Apogee of Empire (2004); and Edge of Crisis (2010). All were published by Johns Hopkins University Press and do for the Spanish Empire what Lawrence Henry Gipson did for the First British Empire.

[23] The key work is María Eugenia Romero Sotelo, Minería y Guerra. La economía de Nueva España, 1810-1821 (México, DF: UNAM, 1997).

[24] Calculated from José María Luis Mora, Crédito Público ([1837] México, DF: Miguel Angel Porrúa, 1986), pp. 413-460. Also see Richard J. Salvucci, Politics, Markets, and Mexico’s “London Debt,” 1823-1887 (NY: Cambridge University Press, 2009).

[25] Jesús Hernández Jaimes, La Formación de la Hacienda Pública Mexicana y las Tensiones Centro -Periferia, 1821-1835  (México, DF: El Colegio de México, 2013). Javier Torres Medina, Centralismo y Reorganización. La Hacienda Pública Durante la Primera República Central de México, 1835-1842 (México, DF: Instituto Mora, 2013). The only treatment in English is Michael P. Costeloe, The Central Republic in Mexico, 1835-1846 (New York: Cambridge University Press, 1993).

[26] An agricultural worker who worked full time, six days a week, for the entire year (a strong assumption) in Central Mexico could have expected cash income of perhaps 24 pesos. If food, such as beans and tortillas, were added, the whole pay might reach 30. The figure of 40 pesos comes from considerably richer agricultural lands around the city of Querétaro, and includes an average of income from nonagricultural employment as well, which was higher. Measuring Worth would put the relative historic standard of living value in 2010 prices at $1,040, with the caveat that this is relative to a bundle of goods purchased in the United States (https://www.measuringworth.com/uscompare/relativevalue.php).

[27]The phrase comes from Guido di Tella and Manuel Zymelman. See Colin Lewis, “Explaining Economic Decline: A review of recent debates in the economic and social history literature on the Argentine,” European Review of Latin American and Caribbean Studies, 64 (1998), pp. 49-68.

[28] Francisco Téllez Guerrero, De reales y granos. Las finanzas y el abasto de la Puebla de los Angeles, 1820-1840 (Puebla, MX: CIHS, 1986). Pp. 47-79.

[29]This is based on an analysis of government lending contracts. See Rosa María Meyer and Richard Salvucci, “The Panic of 1837 in Mexico: Evidence from Government Contracts” (in progress).

[30] There is an interesting summary of this data in U.S. Govt., 57th Cong., 1st sess., House, Monthly Summary of Commerce and Finance of the United States (September 1901) (Washington, DC: GPO, 1901), pp. 984-986.

[31] Salvucci, Politics and Markets, pp. 201-221.

[32] Miguel Galindo y Galindo, La Gran Década Nacional o Relación Histórica de la Guerra de Reforma, Intervención Extranjera, y gobierno del archiduque Maximiliano, 1857-1867 ([1902], 3 vols., México, DF: Fondo de Cultura Económica, 1987).

[33] Carmen Vázquez Mantecón, Santa Anna y la encrucijada del Estado. La dictadura, 1853-1855 (México, DF: Fondo de Cultura Económica, 1986).

[34] Moramay López-Alonso, Measuring Up: A History of Living Standards in Mexico, 1850-1950 (Stanford, CA: Stanford University Press, 2012); Amilcar Challú and Aurora Gómez Galvarriato, "Mexico's Real Wages in the Age of the Great Divergence, 1730-1930," Revista de Historia Económica 33:1 (2015), pp. 123-152; Amílcar E. Challú, "The Great Decline: Biological Well-Being and Living Standards in Mexico, 1730-1840," in Ricardo Salvatore, John H. Coatsworth, and Amilcar E. Challú, Living Standards in Latin American History: Height, Welfare, and Development, 1750-2000 (Cambridge, MA: Harvard University Press, 2010), pp. 23-67.

[35]See Challú and Gómez Galvarriato, “Real Wages,” Figure 5, p. 101.

[36] Luis González et al, La economía mexicana durante la época de Juárez (México, DF: 1976).

[37] Teresa Rojas Rabiela and Ignacio Gutiérrez Ruvalcaba, Cien ventanas a los países de antaño: fotografías del campo mexicano de hace un siglo (México, DF: CONACYT, 2013), pp. 18-65.

[38] Alma Parra, “La Plata en la Estructura Económica Mexicana al Inicio del Siglo XX,” El Mercado de Valores 49:11 (1999), p. 14.

[39] Sandra Kuntz Ficker, Empresa Extranjera y Mercado Interno: El Ferrocarril Central Mexicano (1880-1907) (México, DF: El Colegio de México, 1995).

[40] Priscilla Connolly, El Contratista de Don Porfirio. Obras públicas, deuda y desarrollo desigual (México, DF: Fondo de Cultura Económica, 1997).

[41] Most notably John Tutino, From Insurrection to Revolution in Mexico: Social Bases of Agrarian Violence, 1750-1940 (Princeton, NJ: Princeton University Press, 1986), p. 229. My growth figures are based on INEGI, Estadísticas Históricas de México, 2014 (http://dgcnesyp.inegi.org.mx/cgi-win/ehm2014.exe/CI080010, accessed July 15, 2016).

[42] Stephen H. Haber, Industry and Underdevelopment: The Industrialization of Mexico, 1890-1940 (Stanford, CA: Stanford University Press, 1989); Aurora Gómez-Galvarriato, Industry and Revolution: Social and Economic Change in the Orizaba Valley (Cambridge, MA: Harvard University Press, 2013).

[43] There are literally dozens of accounts of the Revolution. The usual starting point, in English, is Alan Knight, The Mexican Revolution (reprint ed., 2 vols., Lincoln, NE: 1990).

[44] This argument has been made most insistently in Armando Razo and Stephen Haber, “The Rate of Growth of Productivity in Mexico, 1850-1933: Evidence from the Cotton Textile Industry,” Journal of Latin American Studies 30:3 (1998), pp. 481-517.

[45] Robert McCaa, "Missing Millions: The Demographic Cost of the Mexican Revolution," Mexican Studies/Estudios Mexicanos 19:2 (Summer 2003): 367-400; Virgilio Partida-Bush, "Demographic Transition, Demographic Bonus, and Ageing in Mexico," Proceedings of the United Nations Expert Group Meeting on Social and Economic Implications of Changing Population Age Structures (http://www.un.org/esa/population/meetings/Proceedings_EGM_Mex_2005/partida.pdf) (accessed July 15, 2016), pp. 287-290.

[46] An implication of the studies of Alan Knight, and of Clark Reynolds, The Mexican Economy: Twentieth Century Structure and Growth (New Haven, CT: Yale University Press, 1971).

[47] An interesting summary of revisionist thinking on the nature and history of the ejido appears in Emilio Kourí, "La invención del ejido," Nexos, January 2015.

[48]Alan Knight, “Cardenismo: Juggernaut or Jalopy?” Journal of Latin American Studies, 26:1 (1994), pp. 73-107.

[49] Stephen Haber, “The Political Economy of Industrialization,” in Victor Bulmer-Thomas, John Coatsworth, and Roberto Cortes-Conde, eds., The Cambridge Economic History of Latin America (2 vols., New York: Cambridge University Press, 2006), 2:  537-584.

[50] Again, there are dozens of studies of the Mexican economy in this period. Ros's figures come from "Mexico's Trade and Industrialization Experience Since 1960: A Reconsideration of Past Policies and Assessment of Current Reforms," Kellogg Institute (Working Paper 186, January 1993). For a more general study, see Juan Carlos Moreno-Brid and Jaime Ros, Development and Growth in the Mexican Economy: A Historical Perspective (New York: Oxford University Press, 2009). A recent Spanish-language treatment is Enrique Cárdenas Sánchez, El largo curso de la economía mexicana. De 1780 a nuestros días (México, DF: Fondo de Cultura Económica, 2015). A view from a different perspective is Carlos Tello, Estado y desarrollo económico. México 1920-2006 (México, DF: UNAM, 2007).

[51]André A. Hoffman, Long Run Economic Development in Latin America in a Comparative Perspective: Proximate and Ultimate Causes (Santiago, Chile: CEPAL, 2001), p. 19.

[52]Tello, Estado y desarrollo, pp. 501-505.

[53] Mario Vargas Llosa, “Mexico: The Perfect Dictatorship,” New Perspectives Quarterly 8 (1991), pp. 23-24.

[54] Rafael Izquierdo, Política Hacendaria del Desarrollo Estabilizador, 1958-1970 (México, DF: Fondo de Cultura Económica, 1995). The term "stabilizing development" was itself coined by Izquierdo while he was a government minister.

[55]See Foreign Relations of the United States, 1964-1968. Mexico and Central America http://2001-2009.state.gov/r/pa/ho/frus/johnsonlb/xxxi/36313.htm (Accessed July 15, 2016).

[56] José Aguilar Retureta, "The GDP Per Capita of the Mexican Regions (1895-1930): New Estimates," Revista de Historia Económica, 33:3 (2015), pp. 387-423.

[57] For a contemporary account with a sense of the immediacy of the end of the Echeverría regime, see “Así se devaluó el peso,” Proceso, November 13, 1976.

[58] The standard account is Stephen Haber, Herbert Klein, Noel Maurer, and Kevin Middlebrook, Mexico since 1980 (New York: Cambridge University Press, 2008). A particularly astute economic account is Nora Lustig, Mexico: The Remaking of an Economy (2d ed., Washington, DC: The Brookings Institution, 1998).  But also Louise E. Walker, Waking from the Dream. Mexico’s Middle Classes After 1968 (Stanford, CA: Stanford University Press, 2013).

[59] See, for example, Jaime Ros Bosch, Algunas tesis equivocadas sobre el estancamiento económico de México (México, DF: El Colegio de México, 2013).

[60] La Banca Central y la Importancia de la Estabilidad Económica, June 16, 2008 (http://www.banxico.org.mx/politica-monetaria-e-inflacion/material-de-referencia/intermedio/politica-monetaria/%7B3C1A08B1-FD93-0931-44F8-96F5950FC926%7D.pdf, accessed July 15, 2016). Also see Brian Winter, "This Man is Brilliant: So Why Doesn't Mexico's Economy Grow Faster?" Americas Quarterly (http://americasquarterly.org/content/man-brilliant-so-why-doesnt-mexicos-economy-grow-faster) (accessed July 21, 2016).

[61]   For AMLO in his own words, see his A New Hope For Mexico: Saying No to Corruption, Violence, and Trump’s Wall. Translated by Natascha Uhlman (New York: O/R Books, 2018).

Citation: Salvucci, Richard. "Mexico: Economic History." EH.Net Encyclopedia, edited by Robert Whaples. December 27, 2018. URL http://eh.net/encyclopedia/the-economic-history-of-mexico/

 

Encyclopedia – Sorted by Author

The EH.Net Encyclopedia of Economic and Business History is designed to provide students and laymen with high quality reference articles in the field. Articles for the Online Encyclopedia are written by experts, screened by a group of authorities, and carefully edited. A distinguished Advisory Board recommends entry topics, assists in the selection of authors, and defines the project’s scope.



The online encyclopedia articles are indexed alphabetically:

Author – Title
Aaronson, Susan Ariel – From GATT to WTO: The Evolution of an Obscure Agency to One Perceived as Obstructing Democracy
Adams, Sean Patrick – The US Coal Industry in the Nineteenth Century
Aldrich, Mark – History of Workplace Safety in the United States, 1880-1970
Alexander, Barbara – The National Recovery Administration
Allen, Sarah – Urban Decline (and Success) in the United States
Amaral, Luciano – Economic History of Portugal
Attard, Barnard – The Economic History of Australia from 1788: An Introduction
Baack, Ben – The Economics of the American Revolutionary War
Bakker, Gerben – The Economic History of the International Film Industry
Baranoff, Dalit – Fire Insurance in the United States
Bértola, Luis – An Overview of the Economic History of Uruguay since the 1870s
Bierman, Harold Jr. – The 1929 Stock Market Crash
Boal, William M. – Monopsony in American Labor Markets
Bodenhorn, Howard – Antebellum Banking in the United States
Bourne, Jenny – Slavery in the United States
Boyd, Lawrence W. – The Company Town
Boyer, George – English Poor Laws
Brown, Stephen A. – A History of the Bar Code
Bugos, Glenn E. – The History of the Aerospace Industry
Burnette, Joyce – Women Workers in the British Industrial Revolution
Butkiewicz, James – Reconstruction Finance Corporation
Cain, Lou – Cliometrics
Carlos, Ann M. – The Economic History of the Fur Trade: 1670 to 1870
Castaneda, Christopher – Manufactured and Natural Gas Industry
Chandra, Siddharth – Economic Histories of the Opium Trade
Chapra, M. Umer – Islamic Economics: What It Is and How It Developed
Cohen, Benjamin J. – Monetary Unions
Cohn, Raymond L. – Immigration to the United States
Collins, William J. – Fair Housing Laws
Couch, Jim – The Works Progress Administration
Cowen, David – The First Bank of the United States
Craft, Erik D. – An Economic History of Weather Forecasting
Craig, Lee A. – Public Sector Pensions in the United States
Crowley, Terry – Oscar Douglas Skelton and Canada’s Economic History
Cuff, Timothy – Historical Anthropometrics
Cunfer, Geoff – The Dust Bowl
Daniel, Jacoby – Apprenticeship in the United States
Deng, Kent – Economic History of Premodern China (from 221 BC to c. 1800 AD)
Di Matteo, Livio – The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey
Doti, Lynne Pierson – Banking in the Western U.S.
Drabble, John H. – Economic History of Malaysia
Eloranta, Jari – Military Spending Patterns in History
Emery, Herb – Fraternal Sickness Insurance
Engen, Darel Tai – The Economy of Ancient Greece
Fishback, Price V. – Workers’ Compensation
Fisher, Glenn W. – History of Property Taxes in the United States
Flynn, David T. – Credit in the Colonial American Economy
Frank, Zephyr – The International Natural Rubber Market, 1870-1930
Frey, Donald – The Protestant Ethic Thesis
Friedman, Gerald – Labor Unions in the United States
Gardner, Bruce – U.S. Agriculture in the Twentieth Century
Grossman, Richard – US Banking History, Civil War to World War II
Grytten, Ola Honningdal – The Economic History of Norway
Gupta, Bishnupriya – The History of the International Tea Market, 1850-1945
Hackelman, Jac C. – Historical Political Business Cycles in the United States
Haines, Michael – Fertility and Mortality in the United States
Halevi, Nadav – A Brief Economic History of Modern Israel
Hansen, Bradley – Bankruptcy Law in the United States
Harreld, Donald J. – The Dutch Economy in the Golden Age (16th – 17th Centuries)
Henriksen, Ingrid – An Economic History of Denmark
Herren, Robert Stanley – Council of Economic Advisers
Hjerppe, Riitta – An Economic History of Finland
Holley, Donald – Mechanical Cotton Picker
Jones, Norman – Usury
Kaplan, Edward S. – The Fordney-McCumber Tariff of 1922
Khan, B. Zorina – An Economic History of Copyright in Europe and the United States
Khan, B. Zorina – An Economic History of Patent Institutions
Klein, Daniel B. – Turnpikes and Toll Roads in Nineteenth-Century America
La Croix, Sumner – Economic History of Hawai’i
Law, Marc T. – History of Food and Drug Regulation in the United States
Lawrence H. Officer – Purchasing Power Parity
Lewis, Frank D. – The Economic History of the Fur Trade: 1670 to 1870
Lyons, John – Cliometrics
Majewski, John – Turnpikes and Toll Roads in Nineteenth-Century America
Malone, Laurence J. – Rural Electrification Administration
Maloney, Thomas N. – African Americans in the Twentieth Century
Mason, David – Savings and Loan Industry (U.S.)
McDonald, John – Economy of England at the Time of the Norman Conquest
McGuire, Robert A. – Economic Interests and the Adoption of the United States Constitution
Meyer, David R. – The Roots of American Industrialization, 1790-1860
Michener, Ron – Money in the American Colonies
Miron, Jeffrey A. – Alcohol Prohibition
Mitch, David – Education and Economic Growth in Historical Perspective
Moen, Jon – The Panic of 1907
Mosk, Carl – Japanese Industrialization and Economic Growth
Murphy, Sharon Ann – Life Insurance in the United States through World War I
Murray, John E. – Industrial Sickness Funds
Mushin, Jerry – The Euro and Its Antecedents
Mushin, Jerry – The Sterling Area
Mushinski, David – Morris Plan Banks
Myung Soo Cha – The Economic History of Korea
Neill, Robin – Harold Adams Innis
Nelson, Jon P. – Advertising Bans in the United States
Noll, Franklin – The United States Public Debt, 1861 to 1975
Nonnenmacher, Tomas – History of the U.S. Telegraph Industry
Ó Gráda, Cormac – Ireland’s Great Famine
O'Brien, Anthony – Smoot-Hawley Tariff
Officer, Lawrence H. – Gold Standard
Olds, Kelly – The Economic History of Taiwan
Owen, Laura – History of Labor Turnover in the U.S.
Parker, Randall – An Overview of the Great Depression
Patton, Randall L. – A History of the U.S. Carpet Industry
Persson, Karl Gunnar – The Law of One Price
Phillips, Ronnie J. – Morris Plan Banks
Phillips, William H. – Cotton Gin
Puffert, Douglas – Path Dependence
Quinn, Stephen – The Glorious Revolution of 1688
Ransom, Michael R. – Monopsony in American Labor Markets
Ransom, Roger L. – The Economics of the Civil War
Redish, Angela – Bimetallism
Richardson, Gary – Medieval Guilds
Ritschl, Albrecht – The Marshall Plan, 1948-1951
Rockoff, Hugh – U.S. Economy in World War I
Rosenbloom, Joshua – Indentured Servitude in the Colonial U.S.
Rosenbloom, Joshua – The History of American Labor Market Institutions and Outcomes
Routt, David – The Economic Impact of the Black Death
Salvucci, Richard – The Economic History of Mexico
Santos, Joseph – A History of Futures Trading in the United States
Schenk, Catherine R. – Economic History of Hong Kong
Schön, Lennart – Sweden – Economic Growth and Structural Change, 1800-2000
Schrag, Zachary M. – Urban Mass Transit In The United States
Scott, Carole E. – The History of the Radio Industry in the United States to 1940
Selgin, George – Gresham’s Law
Short, Joanna – Economic History of Retirement in the United States
Sicotte, Richard – International Shipping Cartels
Siklos, Pierre L. – Deflation
Singleton, John – An Economic History of New Zealand in the Nineteenth and Twentieth Centuries
Smiley, Gene – The U.S. Economy in the 1920s
Smith, Fred – Urban Decline (and Success) in the United States
Stack, Martin H. – A Concise History of America’s Brewing Industry
Stead, David R. – Agricultural Tenures and Tithes
Stead, David R. – Common Agricultural Policy
Stead, David R. – Thomas Robert Malthus
Stead, David R. – William Marshall
Stead, David R. – David Ricardo
Stead, David R. – Arthur Young
Steckel, Richard H. – A History of the Standard of Living in the United States
Steindl, Frank G. – Economic Recovery in the Great Depression
Stewart, James I. – The Economics of American Farm Unrest, 1865-1900
Tassava, Christopher J. – The American Economy during World War II
Thomasson, Melissa – Health Insurance in the United States
Toma, Mark – Federal Reserve System
Touwen, Jeroen – The Economic History of Indonesia
Troost, William – The Freedmen’s Bureau
Tuttle, Carolyn – Child Labor during the British Industrial Revolution
Walsh, Margaret – The Bus Industry in the United States
Weidenmier, Marc – Money and Finance in the Confederate States of America
Whaples, Robert – Carnegie, Andrew
Whaples, Robert – Child Labor in the United States
Whaples, Robert – California Gold Rush
Whaples, Robert – Hours of Work in U.S. History
White, William J – Economic History of Tractors in the United States
Whitten, David O. – The Depression of 1893
Wicker, Elmus – Banking Panics in the US: 1873-1933
Williamson, Sam – Cliometrics
Wright, Robert – Origins of Commercial Banking in the United States, 1781-1830

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years, initial capital outlays of 24 million British pounds, and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy’s Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the United States West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time after accidents and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California; it was known that a high percentage of all days were sunny, so that outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor will lead to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and now smaller future harvest will have to be consumed more slowly over the period until the next season’s crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop’s inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in others. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining whether private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good that private organizations would create an insufficiently large amount of socially beneficial information? There are also two parts to this latter public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that one might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected its own benefits to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe’s weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled “Disaster on the Lakes.” The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868, and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships that were totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham’s list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for the years 1870 and 1871 cut in half to $5,000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine’s office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer’s eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the joint resolution which “authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms.” Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations on November 1, 1870, at 7:35 a.m. Washington time, at twenty-four stations. The storm-warning system began formal operation on October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress had expanded the Army Signal Service’s explicit forecast responsibilities via an appropriations act to most of the United States, “for such stations, reports, and signal as may be found necessary for the benefit of agriculture and commercial interests.” In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
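The tension between a high verification rate and a valuable service can be made concrete with a small simulation. This is an illustrative sketch using made-up storm probabilities, not Signal Service data: a forecaster who warns only when a storm is nearly certain earns a higher verification rate than a bolder one, yet leaves far more storms unwarned.

```python
import random

random.seed(1)
# Each hypothetical day has some storm chance; a storm occurs with that chance.
chances = [random.random() for _ in range(10_000)]
storms = [p > random.random() for p in chances]

def score(threshold):
    """Warn whenever the storm chance reaches the threshold; return the
    verification rate (share of warnings that verified) and storms missed."""
    warned = [p >= threshold for p in chances]
    hits = sum(w and s for w, s in zip(warned, storms))
    verification = hits / sum(warned)
    missed = sum(s and not w for w, s in zip(warned, storms))
    return verification, missed

for th in (0.3, 0.8):
    v, m = score(th)
    print(f"warn at {th:.0%} chance: verification {v:.0%}, storms missed {m}")
```

The cautious threshold scores better on the verification measure while missing several times as many storms, which is exactly why the 70% figure above is an incomplete gauge of the service’s value.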

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service meteorological network from 1870 to 1890.) Additional display stations only provided storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year | Budget (Real 1880 Dollars) | Second-Order Stations | Third-Order Stations | Repair Stations | Display Stations | Special River Stations | Special Cotton-Region Stations
-----|----------------------------|-----------------------|----------------------|-----------------|------------------|------------------------|-------------------------------
1870 |    32,487 |  25 |    |    |    |    |
1871 |   112,456 |  54 |    |    |    |    |
1872 |   220,269 |  65 |    |    |    |    |
1873 |   549,634 |  80 |  9 |    |    |    |
1874 |   649,431 |  92 | 20 |    |    |    |
1875 |   749,228 |  98 | 20 |    |    |    |
1876 |   849,025 | 106 | 38 | 23 |    |    |
1877 |   849,025 | 116 | 29 | 10 |  9 | 23 |
1878 |   978,085 | 136 | 36 | 12 | 11 | 23 |
1879 | 1,043,604 | 158 | 30 | 17 | 46 | 30 |
1880 | 1,109,123 | 173 | 39 | 49 | 50 | 29 |
1881 | 1,080,254 | 171 | 47 | 44 | 61 | 29 |  87
1882 |   937,077 | 169 | 45 |  3 | 74 | 30 | 127
1883 |   950,737 | 143 | 42 | 27 |  7 | 30 | 124
1884 | 1,014,898 | 138 | 68 |  7 | 63 | 40 | 138
1885 | 1,085,479 | 152 | 58 |  8 | 64 | 66 | 137
1886 | 1,150,673 | 146 | 33 | 11 | 66 | 69 | 135
1887 | 1,080,291 | 145 | 31 | 13 | 63 | 70 | 133
1888 | 1,063,639 | 149 | 30 | 24 | 68 | 78 | 116
1889 | 1,022,031 | 148 | 32 | 23 | 66 | 72 | 114
1890 |   994,629 | 144 | 34 | 15 | 73 | 72 | 114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and Craft (1995), “The Provision and Value of Weather Information Services,” p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day. Most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to the reduction of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid-1870s and between $1 million and $4.5 million per year by the early 1880s.
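The regression approach can be sketched in miniature. The code below uses synthetic data, not Craft’s (1998) data or exact specification; it simply illustrates how regressing log losses on the number of warning locations, alongside controls such as tonnage and storm severity, can recover a roughly one-percent-per-station effect when one is built into the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical season-by-region observations

stations = rng.uniform(40, 80, n)    # storm-warning locations
tonnage = rng.uniform(500, 1500, n)  # shipping tonnage in service
storms = rng.poisson(5, n)           # proxy for weather severity

# Build log losses with a true effect of -0.01 per warning location,
# i.e. each extra location lowers losses by about one percent.
log_losses = (10.0 - 0.01 * stations + 0.0005 * tonnage
              + 0.08 * storms + rng.normal(0, 0.05, n))

# Ordinary least squares with an intercept and the control variables.
X = np.column_stack([np.ones(n), stations, tonnage, storms])
beta, *_ = np.linalg.lstsq(X, log_losses, rcond=None)
print(f"estimated effect per warning location: {beta[1]:.4f}")  # near -0.01
```

Because the controls are included, the estimated station coefficient isolates the warning effect from commerce levels and weather severity, which is the logic of Craft’s identification described above.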

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874, p. 2; December 18, 1875; December 27, 1876, p. 6; December 17, 1878; December 29, 1879, p. 6; February 3, 1881, p. 12; December 28, 1883, p. 3; December 5, 1885, p. 4); Marine Record (December 27, 1883, p. 5; December 25, 1884, pp. 4-5; December 24, 1885, pp. 4-5; December 30, 1886, p. 6; December 15, 1887, pp. 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871-1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm-warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, such reductions in shipping prices due to savings in losses caused by storms can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices. It was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm-warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premia data for shipments on the Great Lakes are limited and difficult to interpret due to the waning and waxing of the insurance cartel’s cohesion, such data are also supportive of the overall interpretation.

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable minimum bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate of the value of nineteenth-century weather information implies that the creation and distribution of storm warnings by the United States Federal Government were a socially beneficial investment.
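As a back-of-the-envelope illustration of how such a return can be computed, the sketch below applies a standard internal-rate-of-return calculation to a hypothetical stream of net benefits. The cash flows are assumptions for illustration (costs loosely follow Table 1, benefits loosely follow the storm-warning estimates above); they are not Craft’s actual series, so the resulting rate will not match the 64 percent bound exactly.

```python
def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return: the discount rate at which NPV equals zero,
    found by bisection (valid here because the NPV has a single sign change)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical net flows in millions of real 1880 dollars, 1870-1888:
# small early deficits while the network was built, then Great Lakes
# storm-warning benefits net of a roughly $1 million annual budget.
flows = [-0.03, -0.11, -0.22, -0.05, 0.35, 0.25, 0.15, 0.15, 0.02,
         0.46, 0.89, 1.42, 1.81, 1.80, 1.74, 1.66, 1.60, 1.67, 1.69]
print(f"implied annual rate of return: {irr(flows):.0%}")
```

Even with modest assumed benefits, the early-year outlays are small relative to the later net gains, which is why the implied annual return runs to tens of percent.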

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of Federal agencies whose activities included scientific research. The Allison Commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses, including courts-martial for soldiers, for deficient job performance. Problems of the military organization, however, included the limited ability to increase one’s rank while working for the Signal Service and tension between the civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper air weather conditions grew rapidly after the turn of the century on account of two related events: the development of aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change of the Weather Bureau’s organizational direction since its transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau’s budget being directed toward aerology research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. The Weather Bureau was transferred in 1940 to the Department of Commerce, where other support for aviation already originated. This transition mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. The agency, later renamed the National Weather Service, has remained in the Department of Commerce ever since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider Allied plans to strike the German oil refineries in Ploesti, Romania. In the winter of 1943, military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could only reach the refineries in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March or August offered possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy, planned for June 5, 1944, and postponed until June 6. The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm, with much loss of life, in October of 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861, the first warnings were issued, and by August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on the weather. (This is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity, meteorology, and business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Storm warnings were discontinued in 1866 and resumed two years later; general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Because storms generally move from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder the effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Meteorological Bureau was founded only in 1878, with a budget of just $12,000.

After the initiation of the storm-warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques after World War I, techniques which incorporated cold and warm fronts. In the difficult days in Norway at the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than its cost of production. There was discussion in the early winter of 1870 between the scientist Increase Lapham and a businessman in Chicago of the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999), but earlier attempts by private organizations in the United States had failed to sustain any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determines the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts compiled from ships’ logs showing efficient sailing routes.

1847

Barometer used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and one half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three-decade-long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survived.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization leading to the development of new forecasting theories centered on three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no.5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

There isn’t a single variable that will serve as a perfect choice for measuring urban decline, but this article will take an in-depth look at urban decline by focusing on the best measure of a city’s well-being – population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 120 million people, from 152 million to 272 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

City             1950        1960        1970        1980        1990        2000     % Change 1950-2000
New York         7,891,957   7,781,984   7,895,563   7,071,639   7,322,564   8,008,278      1.5
Philadelphia     2,071,605   2,002,512   1,949,996   1,688,210   1,585,577   1,517,550    -26.7
Boston             801,444     697,177     641,071     562,994     574,283     589,141    -26.5
Chicago          3,620,962   3,550,404   3,369,357   3,005,072   2,783,726   2,896,016    -20.0
Detroit          1,849,568   1,670,144   1,514,063   1,203,339   1,027,974     951,270    -48.6
Cleveland          914,808     876,050     750,879     573,822     505,616     478,403    -47.7
Kansas City        456,622     475,539     507,330     448,159     435,146     441,545     -3.3
Denver             415,786     493,887     514,678     492,365     467,610     554,636     33.4
Omaha              251,117     301,598     346,929     314,255     335,795     390,007     55.3
Los Angeles      1,970,358   2,479,015   2,811,801   2,966,850   3,485,398   3,694,820     87.5
San Francisco      775,357     740,316     715,674     678,974     723,959     776,733      0.2
Seattle            467,591     557,087     530,831     493,846     516,259     563,374     20.5
Houston            596,163     938,219   1,233,535   1,595,138   1,630,553   1,953,631    227.7
Dallas             434,462     679,684     844,401     904,078   1,006,877   1,188,580    173.6
Phoenix            106,818     439,170     584,303     789,704     983,403   1,321,045   1136.7
New Orleans        570,445     627,525     593,471     557,515     496,938     484,674    -15.0
Atlanta            331,314     487,455     495,039     425,022     394,017     416,474     25.7
Nashville          174,307     170,874     426,029     455,651     488,371     545,524    213.0
Washington         802,178     763,956     756,668     638,333     606,900     572,059    -28.7
Miami              249,276     291,688     334,859     346,865     358,548     362,470     45.4
Charlotte          134,042     201,564     241,178     314,447     395,934     540,828    303.5

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered together by region, and the cities at the top of the table – cities from the Northeast and Midwest – either experienced no significant population growth (New York) or suffered dramatic population losses (Detroit and Cleveland). Their experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experienced triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:
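The percent-change figures in Table 1 follow directly from the 1950 and 2000 census counts. A short Python sketch (using three of the cities from the table; the population figures are taken from Table 1 itself) reproduces them:

```python
# Population in 1950 and 2000 for selected cities, from Table 1
# (U.S. Census Bureau figures).
pop = {
    "Detroit":  (1_849_568, 951_270),
    "Phoenix":  (106_818, 1_321_045),
    "New York": (7_891_957, 8_008_278),
}

def pct_change(p1950, p2000):
    """Percent change from the 1950 count to the 2000 count."""
    return 100 * (p2000 - p1950) / p1950

for city, (p50, p00) in pop.items():
    # Matches the final column of Table 1: -48.6, 1136.7, 1.5
    print(f"{city}: {pct_change(p50, p00):.1f}%")
```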

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

Metropolitan Area                 1950         1960         1970         2000      % Change 1950-2000
New York-Newark-Jersey City, NY   13,047,870   14,700,000   15,812,314   16,470,048    26.2
Philadelphia, PA                   3,658,905    4,175,988    4,525,928    4,580,167    25.2
Boston, MA                         3,065,344    3,357,607    3,708,710    4,001,752    30.5
Chicago-Gary, IL-IN                5,612,248    6,805,362    7,606,101    8,573,111    52.8
Detroit, MI                        3,150,803    3,934,800    4,434,034    4,366,362    38.6
Cleveland, OH                      1,640,319    2,061,668    2,238,320    1,997,048    21.7
Kansas City, MO-KS                   972,458    1,232,336    1,414,503    1,843,064    89.5
Denver, CO                           619,774      937,677    1,242,027    2,414,649   289.6
Omaha, NE                            471,079      568,188      651,174      803,201    70.5
Los Angeles-Long Beach, CA         4,367,911    6,742,696    8,452,461   12,365,627   183.1
San Francisco-Oakland, CA          2,531,314    3,425,674    4,344,174    6,200,867   145.0
Seattle, WA                          920,296    1,191,389    1,523,601    2,575,027   179.8
Houston, TX                        1,021,876    1,527,092    2,121,829    4,540,723   344.4
Dallas, TX                           780,827    1,119,410    1,555,950    3,369,303   331.5
Phoenix, AZ                               NA      663,510      967,522    3,251,876   390.1*
New Orleans, LA                      754,856      969,326    1,124,397    1,316,510    74.4
Atlanta, GA                          914,214    1,224,368    1,659,080    3,879,784   324.4
Nashville, TN                        507,128      601,779      704,299    1,238,570   144.2
Washington, DC                     1,543,363    2,125,008    2,929,483    4,257,221   175.8
Miami, FL                            579,017    1,268,993    1,887,892    3,876,380   569.5
Charlotte, NC                        751,271      876,022    1,028,505    1,775,472   136.3

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.

Table 3: Land Area for Selected U.S. Cities, 1950 – 2000

City (land area in square miles)    1950    1960    1970    2000    % Change 1950-2000
New York, NY                        315.1   300     299.7   303.3     -3.74
Philadelphia, PA                    127.2   129     128.5   135.1      6.21
Boston, MA                           47.8    46      46      48.4      1.26
Chicago, IL                         207.5   222     222.6   227.1      9.45
Detroit, MI                         139.6   138     138     138.8     -0.57
Cleveland, OH                        75      76      75.9    77.6      3.47
Kansas City, MO                      80.6   130     316.3   313.5    288.96
Denver, CO                           66.8    68      95.2   153.4    129.64
Omaha, NE                            40.7    48      76.6   115.7    184.28
Los Angeles, CA                     450.9   455     463.7   469.1      4.04
San Francisco, CA                    44.6    45      45.4    46.7      4.71
Seattle, WA                          70.8    82      83.6    83.9     18.50
Houston, TX                         160     321     433.9   579.4    262.13
Dallas, TX                          112     254     265.6   342.5    205.80
Phoenix, AZ                          17.1   187     247.9   474.9   2677.19
New Orleans, LA                     199.4   205     197.1   180.6     -9.43
Atlanta, GA                          36.9   136     131.5   131.7    256.91
Nashville, TN                        22      29     507.8   473.3   2051.36
Washington, DC                       61.4    61      61.4    61.4      0.00
Miami, FL                            34.2    34      34.3    35.7      4.39
Charlotte, NC                        30      64.8    76     242.3    707.67

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950 – 2000

                                        1950         1960         1970         1980         1990         2000
Population density
(persons per square mile)               50.9         50.7         57.4         64.0         70.3         79.6

Population by region
  West                            19,561,525   28,053,104   34,804,193   43,172,490   52,786,082   63,197,932
  South                           47,197,088   54,973,113   62,795,367   75,372,362   85,445,930  100,236,820
  Midwest                         44,460,762   51,619,139   56,571,663   58,865,670   59,668,632   64,392,776
  Northeast                       39,477,986   44,677,819   49,040,703   49,135,283   50,809,229   53,594,378

Population by region (% of total)
  West                                  13.0         15.6         17.1         19.1         21.2         22.5
  South                                 31.3         30.7         30.9         33.3         34.4         35.6
  Midwest                               29.5         28.8         27.8         26.0         24.0         22.9
  Northeast                             26.2         24.9         24.1         21.7         20.4         19.0

Population living in non-metropolitan areas (millions)
                                        66.2         65.9         63.0         57.1         56.0         55.4
Population living in metropolitan areas (millions)
                                        84.5        113.5        140.2        169.4        192.7        226.0
Percent in suburbs in metropolitan area
                                        23.3         30.9         37.6         44.8         46.2         50.0
Percent in central city in metropolitan area
                                        32.8         32.3         31.4         30.0         31.3         30.3
Percent living in the ten largest cities
                                        14.4         12.1         10.8          9.2          8.8          8.5

Percentage minority by region (data available for 1980-2000 only)
  West                                    NA           NA           NA         26.5         33.3         41.6
  South                                   NA           NA           NA         25.7         28.2         34.2
  Midwest                                 NA           NA           NA         12.5         14.2         18.6
  Northeast                               NA           NA           NA         16.6         20.6         26.6

Housing units by region
  West                             6,532,785    9,557,505   12,031,802   17,082,919   20,895,221   24,378,020
  South                           13,653,785   17,172,688   21,031,346   29,419,692   36,065,102   42,382,546
  Midwest                         13,745,646   16,797,804   18,973,217   22,822,059   24,492,718   26,963,635
  Northeast                       12,051,182   14,798,360   16,642,665   19,086,593   20,810,637   22,180,440

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000. No region’s minority population exceeds 26.5 percent in 1980, but only the Midwest remains below that share by 2000. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by looking at the figures for average population density for the United States as a whole, the figures listing the numbers of Americans living in metropolitan versus non-metropolitan areas, and the figures listing the percentage of Americans living in the ten largest cities in the United States.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited the cities of Detroit and Boston would be able to tell you that urban decline has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city’s public spaces would not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the population losses experienced by Detroit and Boston do not tell the full story about how urban decline has affected these cities. They have both lost population, yet Detroit has lost a great deal more – it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers will begin to leave the city. Yet, when population in a city begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a stock of housing that interacts with a reduction in housing demand, producing a rapid reduction in the real price of housing. Empirical evidence supports the assertions made by the model, for in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
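The asymmetry at the heart of the model can be illustrated with a stylized numerical sketch. This toy linear market, with made-up numbers, is an illustration of the durable-housing insight, not Glaeser and Gyourko’s actual specification: supply is elastic upward (builders add units whenever price exceeds construction cost) but fixed downward (houses do not disappear when demand falls), so demand shocks are absorbed by quantity in booms and by price in busts.

```python
CONSTRUCTION_COST = 100.0  # price needed to induce new building (illustrative)
SLOPE = 1.0                # slope of the linear inverse demand curve

def equilibrium(demand_intercept, stock):
    """Return (stock, price) after the market adjusts to the demand shift."""
    # Price at which the standing stock is absorbed by demand.
    price = max(demand_intercept - SLOPE * stock, 0.0)
    if price > CONSTRUCTION_COST:
        # Boom: construction expands the stock until price falls back to cost.
        stock = (demand_intercept - CONSTRUCTION_COST) / SLOPE
        price = CONSTRUCTION_COST
    # Bust: no one builds, the durable stock stays put, and price bears the shock.
    return stock, price

# Start from equilibrium: a stock of 900 units priced at construction cost.
print(equilibrium(1000, 900))  # baseline: stock 900, price 100
print(equilibrium(1050, 900))  # demand up 50: stock grows to 950, price stays 100
print(equilibrium(950, 900))   # demand down 50: stock stays 900, price falls to 50
```

The same 50-unit demand shock thus produces new construction in a growing city but a steep price decline in a shrinking one, which is why declining cities retain cheap housing long after the jobs leave.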

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value for the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner-occupied housing declined in cities that experienced a race riot in the 1960s, and, in particular, it focuses on the gap in property values that developed between white- and black-owned homes. Nonetheless, a great deal of work still remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their location carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important decisions about location, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, the firm owners must decide where the business should be located within the chosen city. In each case, transportation costs and input costs must dominate the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced out by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and the output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the costs of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: Land close to the city’s transportation hub was in high demand, and, therefore, relatively expensive. 
It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to do this because the costs associated with transporting inputs and outputs to and from the transportation hub would have dominated the savings enjoyed from buying cheaper land on the periphery of the city. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.

Yet, transportation costs and input prices have not simply varied across space; they’ve also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should choose to locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail cost of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measuring the prices in constant 1925 dollars (these values would be roughly $21,260 and $5400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

Year    Millions of Registered Vehicles
1910      0.5
1920      8.1
1930     23.0
1940     27.5
1950     40.4
1960     61.7
1970     89.2
1980    131.6

Source: Muller, p. 36.

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be the best for the firm or resident. Americans began demanding more and improved roads to capitalize on the mobility made possible by the car. Also, the automotive, construction, and tourism related industries lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously relegated to local governments. The landmark National Interstate and Defense Highways Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (However, it is important to note that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73 percent of blacks lived in urban areas, and the majority of urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at suburban locations, and the result for many was a “spatial mismatch” – they lived in the inner city, where employment opportunities were scarce, yet lacked access to transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. As inner-city African Americans coped with high unemployment, high crime rates and urban disturbances such as the race riots of the 1960s became obvious symptoms of economic distress. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities.
White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American – a stark contrast from 1950 when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically, advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find these arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, changes in information technology mean that we no longer need to locate ourselves in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco, or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers occur in Silicon Valley because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity at their own jobs. Another example of economies of agglomeration in Silicon Valley is labor pooling. Because workers who are trained in computer-related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another of the benefits that some individuals will associate with urban living is the diversity of products and experiences that are available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… literally almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences. Over time, the amount of Persian food demanded (on average) by each inhabitant of the city may increase.

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to all. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits from locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities for the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive in the future.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3 (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, editors, The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/

Turnpikes and Toll Roads in Nineteenth-Century America

Daniel B. Klein, Santa Clara University and John Majewski, University of California – Santa Barbara 1

Private turnpikes were business corporations that built and maintained a road for the right to collect fees from travelers.2 Accounts of the nineteenth-century transportation revolution often treat turnpikes as merely a prelude to more important improvements such as canals and railroads. Turnpikes, however, left important social and political imprints on the communities that debated and supported them. Although turnpikes rarely paid dividends or other forms of direct profit, they nevertheless attracted enough capital to expand both the coverage and quality of the U. S. road system. Turnpikes demonstrated how nineteenth-century Americans integrated elements of the modern corporation – with its emphasis on profit-taking residual claimants – with non-pecuniary motivations such as use and esteem.

Private road building came and went in waves throughout the nineteenth century and across the country, with between 2,500 and 3,200 companies successfully financing, building, and operating their toll roads. There were three especially important episodes of toll road construction: the turnpike era of the eastern states, 1792 to 1845; the plank road boom, 1847 to 1853; and the toll roads of the far West, 1850 to 1902.

The Turnpike Era, 1792–1845

Prior to the 1790s Americans had no direct experience with private turnpikes; roads were built, financed and managed mainly by town governments. Typically, townships levied a road labor tax. The State of New York, for example, assessed eligible males a minimum of three days of roadwork under penalty of a one-dollar fine. The labor requirement could be avoided if the worker paid a fee of 62.5 cents a day. As with public works of any kind, incentives were weak because the chain of activity could not be traced to a residual claimant – that is, private owners who claim the “residuals,” profit or loss. The laborers were brought together in a transitory, disconnected manner. Since overseers and laborers were commonly farmers, too often the crop schedule, rather than road deterioration, dictated the repair schedule. Except in cases of special appropriations, financing came in dribbles, deriving mostly from the fines and commutations of the assessed inhabitants. Commissioners could hardly lay plans for decisive improvements. When a needed connection passed through unsettled lands, it was especially difficult to mobilize labor because assessments could be worked out only in the district in which the laborer resided. Because work areas were divided into districts, as well as into towns, problems arose in coordinating the various jurisdictions. Road conditions thus remained inadequate, as New York’s governors often acknowledged publicly (Klein and Majewski 1992, 472-75).

For Americans looking for better connections to markets, the poor state of the road system was a major problem. In 1790, a viable steamboat had not yet been built, canal construction was hard to finance and limited in scope, and the first American railroad would not be completed for another forty years. Better transportation meant, above all, better highways. State and local governments, however, had small bureaucracies and limited budgets which prevented a substantial public sector response. Turnpikes, in essence, were organizational innovations borne out of necessity – “the states admitted that they were unequal to the task and enlisted the aid of private enterprise” (Durrenberger 1931, 37).

America’s very limited and lackluster experience with the publicly operated toll roads of the 1780s hardly portended a future boom in private toll roads, but the success of private toll bridges may have inspired some future turnpike companies. From 1786 to 1798, fifty-nine private toll bridge companies were chartered in the northeast, beginning with Boston’s Charles River Bridge, which brought investors an average annual return of 10.5 percent in its first six years (Davis 1917, II, 188). Private toll bridges operated without many of the regulations that would hamper the private toll roads that soon followed, such as mandatory toll exemptions and conflicts over the location of toll gates. Also, toll bridges, by their very nature, faced little toll evasion, which was a serious problem for toll roads.

The more significant predecessor to America’s private toll road movement was Britain’s success with private toll roads. Beginning in 1663 and peaking from 1750 to 1772, Britain experienced a private turnpike movement large enough to acquire the nickname “turnpike mania” (Pawson 1977, 151). Although the British movement inspired the future American turnpike movement, the institutional differences between the two were substantial. Most important, perhaps, was the difference in their organizational forms. British turnpikes were incorporated as trusts – non-profit organizations financed by bonds – while American turnpikes were stock-financed corporations seemingly organized to pay dividends, though acting within narrow limits determined by the charter. Contrary to modern sensibilities, this difference made the British trusts, which operated under the firm expectation of fulfilling bond obligations, more intent and more successful in garnering residuals. In contrast, for the American turnpikes the hope of dividends was merely a faint hope, and never a legal obligation. Odd as it sounds, the stock-financed “business” corporation was better suited to operating the project as a civic enterprise, paying out returns in use and esteem rather than cash.

The first private turnpike in the United States was chartered by Pennsylvania in 1792 and opened two years later. Spanning 62 miles between Philadelphia and Lancaster, it quickly attracted the attention of merchants in other states, who recognized its potential to direct commerce away from their regions. Soon lawmakers from those states began chartering turnpikes. By 1800, 69 turnpike companies had been chartered throughout the country, especially in Connecticut (23) and New York (13). Over the next decade nearly six times as many turnpikes were incorporated (398). Table 1 shows that in the mid-Atlantic and New England states between 1800 and 1830, turnpike companies accounted for 27 percent of all business incorporations.

Table 1: Turnpikes as a Percentage of All Business Incorporations,
by Special and General Acts, 1800-1830

As shown in Table 2, a wider set of states had incorporated 1562 turnpikes by the end of 1845. Somewhere between 50 and 70 percent of these succeeded in building and operating toll roads. A variety of regulatory and economic conditions – outlined below – account for why a relatively low percentage of chartered turnpikes became going concerns. In New York, for example, tolls could be collected only after turnpikes passed inspections, which were typically conducted after ten miles of roadway had been built. Only 35 to 40 percent of New York turnpike projects – or about 165 companies – reached operational status. In Connecticut, by contrast, where settlement covered the state and turnpikes more often took over existing roadbeds, construction costs were much lower and about 87 percent of the companies reached operation (Taylor 1934, 210).

Table 2: Turnpike Incorporation, 1792-1845

State 1792-1800 1801-10 1811-20 1821-30 1831-40 1841-45 Total
NH 4 45 5 1 4 0 59
VT 9 19 15 7 4 3 57
MA 9 80 8 16 1 1 115
RI 3 13 8 13 3 1 41
CT 23 37 16 24 13 0 113
NY 13 126 133 75 83 27 457
PA 5 39 101 59 101 37 342
NJ 0 22 22 3 3 0 50
VA 0 6 7 8 25 0 46
MD 3 9 33 12 14 7 78
OH 0 2 14 12 114 62 204
Total 69 398 362 230 365 138 1562

Source: Klein and Fielding 1992: 325.

Although the states of Pennsylvania, Virginia and Ohio subsidized privately-operated turnpike companies, most turnpikes were financed solely by private stock subscription and structured to pay dividends. This was a significant achievement, considering the large construction costs (averaging around $1,500 to $2,000 per mile) and the typical length (15 to 40 miles). But the achievement was most striking because, as New England historian Edward Kirkland (1948, 45) put it, “the turnpikes did not make money. As a whole this was true; as a rule it was clear from the beginning.” Organizers and “investors” generally regarded the initial proceeds from sale of stock as a fund from which to build the facility, which would then earn enough in toll receipts to cover operating expenses. One might hope for dividend payments as well, but “it seems to have been generally known long before the rush of construction subsided that turnpike stock was worthless” (Wood 1919, 63).3

Turnpikes promised little in the way of direct dividends and profits, but they offered potentially large indirect benefits. Because turnpikes facilitated movement and trade, nearby merchants, farmers, land owners, and ordinary residents would benefit from a turnpike. Gazetteer Thomas F. Gordon aptly summarized the relationship between these “indirect benefits” and investment in turnpikes: “None have yielded profitable returns to the stockholders, but everyone feels that he has been repaid for his expenditures in the improved value of his lands, and the economy of business” (quoted in Majewski 2000, 49). Gordon’s statement raises an important question. If one could not be excluded from benefiting from a turnpike, and if dividends were not in the offing, what incentive would anyone have to help finance turnpike construction? The turnpike communities faced a serious free-rider problem.

Nevertheless, hundreds of communities overcame the free-rider problem, mostly through a civic-minded culture that encouraged investment for long-term community gain. Alexis de Tocqueville observed that, excepting those of the South, Americans were infused with a spirit of public-mindedness. Their strong sense of community spirit resulted in the funding of schools, libraries, hospitals, churches, canals, dredging companies, wharves, and water companies, as well as turnpikes (Goodrich 1948). Vibrant community and cooperation sprang, according to Tocqueville, from the fertile ground of liberty:

If it is a question of taking a road past his property, [a man] sees at once that this small public matter has a bearing on his greatest private interests, and there is no need to point out to him the close connection between his private profit and the general interest. … Local liberties, then, which induce a great number of citizens to value the affection of their kindred and neighbors, bring men constantly into contact, despite the instincts which separate them, and force them to help one another. … The free institutions of the United States and the political rights enjoyed there provide a thousand continual reminders to every citizen that he lives in society. … Having no particular reason to hate others, since he is neither their slave nor their master, the American’s heart easily inclines toward benevolence. At first it is of necessity that men attend to the public interest, afterward by choice. What had been calculation becomes instinct. By dint of working for the good of his fellow citizens, he in the end acquires a habit and taste for serving them. … I maintain that there is only one effective remedy against the evils which equality may cause, and that is political liberty (Alexis de Tocqueville, 511-13, Lawrence/Mayer edition).

Tocqueville’s testimonial is broad and general, but its accuracy is seen in the archival records and local histories of the turnpike communities. Stockholders’ lists reveal a web of neighbors, kin, and locally prominent figures voluntarily contributing to what they saw as an important community improvement. Appeals made in newspapers, local speeches, town meetings, door-to-door solicitations, correspondence, and negotiations in assembling the route stressed the importance of community improvement rather than dividends.4 Furthermore, many toll road projects involved the effort to build a monument and symbol of the community. Participating in a company by donating cash or giving moral support was a relatively rewarding way of establishing public services; it was pursued at least in part for the sake of community romance and adventure as ends in themselves (Brown 1973, 68). It should be noted that turnpikes were not entirely exceptional enterprises in the early nineteenth century. In many fields, the corporate form had a public-service ethos, aimed not primarily at paying dividends, but at serving the community (Handlin and Handlin 1945, 22, Goodrich 1948, 306, Hurst 1970, 15).

Given the importance of community activism and long-term gains, most “investors” tended to be not outside speculators, but locals positioned to enjoy the turnpikes’ indirect benefits. “But with a few exceptions, the vast majority of the stockholders in turnpike were farmers, land speculators, merchants or individuals and firms interested in commerce” (Durrenberger 1931, 104). A large number of ordinary households held turnpike stock. Pennsylvania compiled the most complete set of investment records, which show that more than 24,000 individuals purchased turnpike or toll bridge stock between 1800 and 1821. The average holding was $250 worth of stock, and the median was less than $150 (Majewski 2001). Such sums indicate that most turnpike investors were wealthier than the average citizen, but hardly part of the urban elite that dominated larger corporations such as the Bank of the United States. County-level studies indicate that most turnpike investment came from farmers and artisans, as opposed to the merchants and professionals more usually associated with early corporations (Majewski 2000, 49-53).

Turnpikes became symbols of civic pride only after enduring a period of substantial controversy. In the 1790s and early 1800s, some Americans feared that turnpikes would become “engrossing monopolists” who would charge travelers exorbitant tolls or abuse eminent domain privileges. Others simply did not want to pay for travel that had formerly been free. To conciliate these different groups, legislators wrote numerous restrictions into turnpike charters. Toll gates, for example, often could be spaced no closer than every five or even ten miles. This regulation enabled some users to travel without encountering a toll gate, and eased the practice of steering horses and the high-mounted vehicles of the day off the main road so as to evade the toll gate, a practice known as “shunpiking.” The charters or general laws also granted numerous exemptions from toll payment. In New York, the exempt included people traveling on family business, those attending or returning from church services and funerals, town meetings, or blacksmiths’ shops, those on military duty, and those who lived within one mile of a toll gate. In Massachusetts some of the same trips were exempt, as was anyone residing in the town where the gate was placed and anyone “on the common and ordinary business of family concerns” (Laws of Massachusetts 1805, chapter 79, 649). In the face of exemptions and shunpiking, turnpike operators sometimes petitioned authorities for a toll hike, stiffer penalties against shunpikers, or the relocation of the toll gate. The record indicates that petitioning the legislature for such relief was a costly and uncertain affair (Klein and Majewski 1992, 496-98).

In view of the difficult regulatory environment and apparent free-rider problem, the success of early turnpikes in raising money and improving roads was striking. The movement built new roads at rates previously unheard of in America. Table 3 gives ballpark estimates of the cumulative investment in constructing turnpikes up to 1830 in New England and the Middle Atlantic. Repair and maintenance costs are excluded. These construction investment figures are probably too low – they generally exclude, for example, toll revenue that might have been used to finish construction – but they nevertheless indicate the ability of private initiatives to raise money in an economy in which capital was in short supply. Turnpike companies in these states raised more than $24 million by 1830, an amount equaling 6.15 percent of those states’ 1830 GDP. To put this into comparative perspective, between 1956 and 1995 all levels of government spent $330 billion (in 1996 dollars) in building the interstate highway system, a cumulative total equaling only 4.30 percent of 1996 GDP.

Table 3
Cumulative Turnpike Investment (1800-1830) as Percentage of 1830 GDP

State Cumulative Turnpike Investment, 1800-1830 ($) Cumulative Turnpike Investment as Percent of 1830 GDP Cumulative Turnpike Investment per Capita, 1830 ($)
Maine 35,000 0.16 0.09
New Hampshire 575,100 2.11 2.14
Vermont 484,000 3.37 1.72
Massachusetts 4,200,000 7.41 6.88
Rhode Island 140,000 1.54 1.44
Connecticut 1,036,160 4.68 3.48
New Jersey 1,100,000 4.79 3.43
New York 9,000,000 7.06 4.69
Pennsylvania 6,400,000 6.67 4.75
Maryland 1,500,000 3.85 3.36
TOTAL 24,470,260 6.15 4.49
Interstate Highway System, 1956-1996 $330 billion 4.15 (1996 GNP)

Sources: Pennsylvania turnpike investment: Durrenberger 1931: 61; New England turnpike investment: Taylor 1934: 210-11; New York, New Jersey, and Maryland turnpike investment: Fishlow 2000, 549. Only private investment is included. State GDP data come from Bodenhorn 2000: 237. Figures for the cost of the Interstate Highway System can be found at http://www.publicpurpose.com/hwy-is$.htm. Please note that our investment figures generally do not include investment to finish roads by loans or the use of toll revenue. The table therefore underestimates investment in turnpikes.
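Table 3’s totals can be verified with a few lines of arithmetic. A minimal sketch in Python (the state-level figures come straight from the table; the implied GDP denominator is a back-of-the-envelope derivation, not a figure from the source):

```python
# State-level cumulative turnpike investment, 1800-1830, as reported in Table 3.
investment = {
    "Maine": 35_000,
    "New Hampshire": 575_100,
    "Vermont": 484_000,
    "Massachusetts": 4_200_000,
    "Rhode Island": 140_000,
    "Connecticut": 1_036_160,
    "New Jersey": 1_100_000,
    "New York": 9_000_000,
    "Pennsylvania": 6_400_000,
    "Maryland": 1_500_000,
}

total = sum(investment.values())
print(f"Total: ${total:,}")  # $24,470,260 -- matches the table's TOTAL row

# The table puts the total at 6.15 percent of these states' combined 1830 GDP,
# which implies a combined denominator of roughly $400 million.
implied_gdp = total / 0.0615
print(f"Implied combined 1830 GDP: ${implied_gdp:,.0f}")
```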

The organizational advantages of turnpike companies relative to government road management generated not only more road mileage, but also higher quality roads (Taylor 1934, 334, Parks 1967, 23, 27). New York state gazetteer Horatio Spafford (1824, 125) wrote that turnpikes have been “an excellent school, in every road district, and people now work the highways to much better advantage than formerly.” Companies worked to lay out roadways intelligently so as to form connected routes. The corporate form traversed town and county boundaries, so a single company could bring what would otherwise be separate segments together into a single organization. “Merchants and traders in New York sponsored pikes leading across northern New Jersey in order to tap the Delaware Valley trade which would otherwise have gone to Philadelphia” (Lane 1939, 156).

Turnpike networks became highly organized systems that sought to find the most efficient way of connecting eastern cities with western markets. Decades before the Erie Canal, private individuals realized the natural opening through the Appalachians and planned a system of turnpikes connecting Albany to Syracuse and beyond. Figure 1 shows the principal routes westward from Albany. The upper route begins with the Albany & Schenectady Turnpike, connects to the Mohawk Turnpike, and then the Seneca Turnpike. The lower route begins with the First Great Western Turnpike and then branches at Cherry Valley into the Second and Third Great Western Turnpikes. Corporate papers of these companies reveal that organizers of different companies talked to each other; they were quite capable of coordinating their intentions and planning mutually beneficial activities by voluntary means. When the Erie Canal was completed in 1825 it roughly followed the alignment of the upper route and greatly reduced travel on the competing turnpikes (Baer, Klein, and Majewski 1992).

Figure 1: Turnpike Network in Central New York, 1845

Another excellent example of turnpike integration was the Pittsburgh Pike. The Pennsylvania route consisted of a combination of five turnpike companies, each of which built a road segment connecting Pittsburgh and Harrisburg, where travelers could take another series of turnpikes to Philadelphia. Completed in 1820, the Pittsburgh Pike greatly improved freighting over the rugged Allegheny Mountains. Freight rates between Philadelphia and Pittsburgh were cut in half because wagons increased their capacity, speed, and certainty (Reiser 1951, 76-77). Although the state government invested in the companies that formed the Pittsburgh Pike, records of the two companies for which we have complete investment information show that private interests contributed 62 percent of the capital (calculated from Majewski 2000: 47-51; Reiser 1951, 76). Residents in numerous communities contributed to individual projects out of their own self-interest. Their provincialism nevertheless helped create a coherent and integrated system.

A comparison of the Pittsburgh Pike and the National Road demonstrates the advantages of turnpike corporations over roads financed directly from government sources. Financed by the federal government, the National Road was built between Cumberland, Maryland, and Wheeling, West Virginia, where it was then extended through the Midwest with the hope of reaching the Mississippi River. Although it never reached the Mississippi, the federal government nevertheless spent $6.8 million on the project (Goodrich 1960, 54, 65). The trans-Appalachian section of the National Road competed directly against the Pittsburgh Pike. From the records of two of the five companies that formed the Pittsburgh Pike, we estimate it cost $4,805 per mile to build (Majewski 2000, 47-51, Reiser 1951, 76). The federal government, on the other hand, spent $13,455 per mile to complete the first 200 miles of the National Road (Fishlow 2000, 549). Besides costing much less, the Pittsburgh Pike was far better in quality. The toll gates along the Pittsburgh Pike provided a steady stream of revenue for repairs. The National Road, on the other hand, depended upon intermittent government outlays for basic maintenance, and the road quickly deteriorated. One army engineer in 1832 found “the road in a shocking condition, and every rod of it will require great repair; some of it now is almost impassable” (quoted in Searight, 60). Historians have found that travelers generally preferred to take the Pittsburgh Pike rather than the National Road.
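The cost comparison above reduces to simple arithmetic. A sketch using only the per-mile figures quoted in the paragraph (from Majewski/Reiser and Fishlow):

```python
# Per-mile construction costs quoted in the text.
pittsburgh_pike = 4_805   # estimated from two of the five member companies
national_road = 13_455    # first 200 miles, federally financed

# The federally built road cost roughly 2.8 times as much per mile.
ratio = national_road / pittsburgh_pike
print(f"Cost ratio: {ratio:.1f}x")  # 2.8x

# At the quoted rate, the first 200 miles alone absorbed $2,691,000
# of the $6.8 million the federal government spent on the project.
print(f"First 200 miles: ${200 * national_road:,}")  # $2,691,000
```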

The Plank Road Boom, 1847–1853

By the 1840s the major turnpikes were increasingly eclipsed by the (often state-subsidized) canals and railroads. Many toll roads reverted to free public use and quickly degenerated into miles of dust, mud and wheel-carved ruts. To link to the new and more powerful modes of communication, well-maintained, short-distance highways were still needed, but because governments became overextended in poor investments in canals, taxpayers were increasingly reluctant to fund internal improvements. Private entrepreneurs found the cost of the technologically most attractive road surfacing material (macadam, a compacted covering of crushed stones) prohibitively expensive at $3,500 per mile. Thus the ongoing need for new feeder roads spurred the search for innovation, and plank roads – toll roads surfaced with wooden planks – seemed to fit the need.

The plank road technique appears to have been introduced into Canada from Russia in 1840. It reached New York a few years later, after the village of Salina, near Syracuse, sent civil engineer George Geddes to Toronto to investigate. After two trips Geddes (whose father, James, was an engineer for the Erie and Champlain Canals, and an enthusiastic canal advocate) was convinced of the plank roads’ feasibility and became their great booster. Plank roads, he wrote in Scientific American (Geddes 1850a), could be built at an average cost of $1,500 per mile – although $1,900 would have been more accurate (Majewski, Baer and Klein 1994, 109, fn15). Geddes also published a pamphlet containing an influential, if overly optimistic, estimate that Toronto’s road planks had lasted eight years (Geddes 1850b). Simplicity of design made plank roads even more attractive. Road builders put down two parallel lines of timbers four or five feet apart, which formed the “foundation” of the road. They then laid, at right angles, planks that were about eight feet long and three or four inches thick. Builders used no nails or glue to secure the planks – they were secured only by their own weight – but they did build ditches on each side of the road to ensure proper drainage (Klein and Majewski 1994, 42-43).
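The appeal of planking over macadam is easiest to see side by side. A hedged sketch using the per-mile figures above (the 10-mile feeder length is an illustrative assumption, not from the source):

```python
# Per-mile surfacing costs quoted in the text.
costs = {
    "plank (Geddes's claim)": 1_500,
    "plank (corrected estimate)": 1_900,
    "macadam": 3_500,
}

# Cost of a hypothetical 10-mile feeder road under each surface.
miles = 10
for surface, per_mile in costs.items():
    print(f"{surface}: ${miles * per_mile:,}")

# Even at the corrected $1,900 estimate, planking undercut macadam by about 46%.
savings = 1 - costs["plank (corrected estimate)"] / costs["macadam"]
print(f"Savings vs. macadam: {savings:.0%}")  # 46%
```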

No less important than plank road economics and technology were the public policy changes that accompanied plank roads. Policymakers, perhaps aware that overly restrictive charters had hamstrung the first turnpike movement, were more permissive in the plank road era. Adjusted for deflation, toll rates were higher, toll gates were separated by shorter distances, and fewer local travelers were exempted from payment of tolls.

Although few today have heard of them, for a short time it seemed that plank roads might be one of the great innovations of the day. In just a few years, more than 1,000 companies built more than 10,000 miles of plank roads nationwide, including more than 3,500 miles in New York (Klein and Majewski 1994, Majewski, Baer, Klein 1993). According to one observer, plank roads, along with canals and railroads, were “the three great inscriptions graven on the earth by the hand of modern science, never to be obliterated, but to grow deeper and deeper” (Bogart 1851).

Plank roads were chartered throughout the United States, except in most of New England, and were especially numerous in the top lumber-producing states of the Midwest and Mid-Atlantic, as shown in Table 4.

Table 4: Plank Road Incorporation by State

State Number
New York 335
Pennsylvania 315
Ohio 205
Wisconsin 130
Michigan 122
Illinois 88
North Carolina 54
Missouri 49
New Jersey 25
Georgia 16
Iowa 14
Vermont 14
Maryland 13
Connecticut 7
Massachusetts 1
Rhode Island, Maine 0
Total 1388

Notes: The figure for Ohio is through 1851; Pennsylvania, New Jersey, and Maryland are through 1857. Few plank roads were incorporated after 1857. In western states, some toll roads were incorporated and built as plank roads, so the 1,388 figure should not be taken as a national total. For a complete description of the sources for this table, see Majewski, Baer, & Klein 1993: 110.

New York, the leading lumber state, had both the greatest number of plank road charters (335) and the largest value of lumber production ($13,126,000 in 1849 dollars). Plank roads were especially popular in rural dairy counties, where farmers needed quick and dependable transportation to urban markets (Majewski, Baer and Klein 1993).

The plank road and eastern turnpike episodes shared several features. As with the earlier turnpikes, investment in plank road companies came from local landowners, farmers, merchants, and professionals. Stock purchases were motivated less by the prospect of earning dividends than by the convenience and increased trade and development that the roads would bring. To many communities, plank roads held the hope of revitalization and the reversal (or slowing) of relative decline. But those hoping to attain these benefits once again faced a free-rider problem. Investors in plank roads, like the investors in the earlier turnpikes, were often motivated by esteem mechanisms – community allegiance and appreciation, reputational incentives, and their own conscience.

Although plank roads were smooth and sturdy, faring better in rain and snow than dirt and gravel roads did, they lasted only four or five years – not the eight to twelve years that promoters had claimed. Thus, the rush of construction ended suddenly by 1853, and by 1865 most companies had either switched to dirt and gravel surfaces or abandoned their roads altogether.
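The fatal gap between promise and performance can be restated as simple annualized arithmetic. The figures below are those cited above (roughly $1,900 per mile to build, a claimed lifespan of eight to twelve years against an observed four to five); the annualization itself is only our illustration:

```python
# Annualized cost per mile of a plank surface, using the figures cited
# in the text: ~$1,900 per mile to build, promoters' claimed lifespan
# of 8-12 years versus the observed 4-5 years.
COST_PER_MILE = 1900

def annual_cost(lifespan_years):
    """Cost per mile per year if the surface must be relaid every `lifespan_years`."""
    return COST_PER_MILE / lifespan_years

promised = annual_cost(10.0)  # midpoint of the claimed 8-12 years
actual = annual_cost(4.5)     # midpoint of the observed 4-5 years

# Planks wearing out twice as fast roughly doubled the annual outlay.
ratio = actual / promised
print(round(promised), round(actual), round(ratio, 2))
```

On these assumptions, a mile of road that promoters implicitly priced at about $190 a year actually cost over $420 a year to keep planked, more than twice as much, which is consistent with the abrupt end of the boom.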

Toll Roads in the Far West, 1850 to 1902

Unlike the areas served by the earlier turnpikes and plank roads, Colorado, Nevada, and California in the 1850s and 1860s lacked the settled communities and social networks that induced participation in community enterprise and improvement. Miners and the merchants who served them knew that the mining boom would not continue indefinitely and therefore seldom planted deep roots. Nor did the large farms that later spread across California foster civic engagement to anywhere near the degree that the small farms of the East had. Society in the early years of the West was not one where town meetings, door-to-door solicitations, and newspaper campaigns were likely to rally broad support for a road project. The lack of strong communities also meant that there would be few opponents to pressure the government for toll exemptions and otherwise hamper toll road operations. These conditions ensured that toll roads would tend to be more profit-oriented than the eastern turnpikes and plank road companies. Still, it is not clear whether on the whole the toll roads of the Far West were profitable.

The California toll road era began in 1850 with the passage of general laws of incorporation. In 1853 new laws reduced stock subscription requirements from $2,000 per mile to $300 per mile. The 1853 laws also delegated regulatory authority to the county governments. Counties were allowed “to set tolls at rates not to prevent a return of 20 percent,” but they did not interfere with the location of toll roads and usually looked favorably on the toll road companies. After passage of the 1853 laws, the number of toll road incorporations increased dramatically, peaking at nearly 40 new incorporations in 1866 alone. Companies were also created by special acts of the legislature, and some seem to have operated without formal incorporation at all. David and Linda Beito (1998, 75, 84) show that in Nevada many entrepreneurs had built and operated toll roads – or other basic infrastructure – before there was a State of Nevada, and some operated for years without any government authority.

All told, in the Golden State, approximately 414 toll road companies were initiated,5 at least 159 of which successfully built and operated toll roads. Table 5 provides some rough numbers for toll roads in western states. The numbers presented there are minimums. For California and Nevada, the numbers probably only slightly underestimate the true totals; for the other states the figures are quite sketchy and might significantly underestimate true totals. Again, an abundance of testimony indicates that the private road companies were the serious road builders, in terms of quantity and quality (see the ten quotations at Klein and Yin 1996, 689-90).

Table 5: Rough Minimums on Toll Roads in the West

State   Toll Road Incorporations   Toll Roads Actually Built
California 414 159
Colorado 350 n.a.
Nevada n.a. 117
Texas 50 n.a.
Wyoming 11 n.a.
Oregon 10 n.a.

Sources: For California, Klein and Yin 1996: 681-82; for Nevada, Beito and Beito 1998: 74; for the other states, notes and correspondence in D. Klein’s files.

Table 6 attempts to justify rough estimates of the total number of toll road companies and total toll road miles. The first three numbers in the “Incorporations” column come from Tables 2, 4, and 5. The estimates of success rates and average road length (in the third and fourth columns) are extrapolations from components that have been studied with more care. We have made these estimates conservative, in the sense of avoiding any overstatement of the extent of private road building. The ~ symbol keeps the reader mindful that many of these numbers are estimates. The numbers in the right-hand column have been rounded to the nearest 1,000, to avoid any false impression of precision. The “Other” row suggests a minimum to cover all the regions, periods, and road types not covered in Tables 2, 4, and 5. For example, it would cover turnpikes in the East, South and Midwest after 1845 (Virginia’s turnpike boom came in the late 1840s and 1850s), and all turnpikes and plank roads in Indiana, whose county-based incorporation, it seems, has never been systematically researched. Ideally, not only would the numbers be more definite and complete, but there would be a weighting by years of operation. The “30,000 – 52,000 miles” should therefore be read as a range for the sum of all the miles operated by any company at any time during the 100-plus-year period.

Table 6: A Rough Tally of the Private Toll Roads

Toll Road Movement                                    Incorporations   % Successful in Building Road   Roads Built and Operated   Average Road Length   Toll Road Miles Operated
Turnpikes incorporated 1792-1845                      1562             ~55%                            ~859                       ~18                   ~15,000
Plank roads incorporated 1845 to roughly 1860         1388             ~65%                            ~902                       ~10                   ~9,000
Toll roads in the West incorporated 1850 to c. 1902   ~1127            ~40%                            ~450                       ~15                   ~7,000
Other (a rough guess)                                 ~1000            ~50%                            ~500                       ~16                   ~8,000
Ranges for totals                                     5,000-5,600 incorporations   48-60 percent      2,500-3,200 roads          12-16 miles           30,000-52,000 miles

Sources: Those of Tables 2, 4, and 5, plus the research files of the authors.
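The arithmetic that generates Table 6 (incorporations times an estimated success rate gives roads built, and roads built times an average length gives miles) can be reproduced with a short script. The inputs are the table’s own figures, and rounding miles to the nearest 1,000 follows the table’s convention:

```python
# Reproduce the Table 6 arithmetic from its own inputs:
# incorporations x success rate -> roads built;
# roads built x average length -> miles operated (rounded to 1,000s).
rows = {
    "Turnpikes, 1792-1845":           (1562, 0.55, 18),
    "Plank roads, 1845-c.1860":       (1388, 0.65, 10),
    "Western toll roads, 1850-1902":  (1127, 0.40, 15),
    "Other (rough guess)":            (1000, 0.50, 16),
}

miles = {}
for name, (incorporations, success_rate, avg_length) in rows.items():
    roads_built = incorporations * success_rate
    # Round to the nearest 1,000 miles, as the table does.
    miles[name] = round(roads_built * avg_length, -3)

total = sum(miles.values())
print(miles, total)
```

The computed total of about 39,000 miles falls comfortably inside the 30,000 to 52,000 mile range given in the table.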

The End of Toll Roads in the Progressive Period

In 1880 many toll road companies nationwide continued to operate – probably in the range of 400 to 600 companies.6 But by 1920 the private toll road was almost entirely stamped out. From Maine to California, the laws and political attitudes from around 1880 onward moved against the handling of social affairs in ways that seemed informal, inexpert and unsystematic. Progressivism represented a burgeoning of more collectivist ideologies and policy reforms. Many progressive intellectuals took inspiration from European socialist doctrines. Although the politics of restraining corporate evils had a democratic and populist aspect, the bureaucratic spirit was highly managerial and hierarchical, intending to replicate the efficiency of large corporations in the new professional and scientific administration of government (Higgs 1987, 113-116, Ekirch 1967, 171-94).

One might point to the rise of the bicycle and later the automobile, both of which needed a harder and smoother surface, to explain the growth of America’s road network in the Progressive period. But such demand-side changes do not speak to the issues of road ownership and tolling. Automobiles achieved higher speeds, which made stopping to pay a toll more inconvenient, and that may have reinforced the anti-toll-road movement that was underway prior to the automobile. Such developments figured into the history of road policy, but they did not really provide a good reason for the policy movement against the toll roads. The following words of a county board of supervisors in New York in 1906 indicate a more general ideological bent against toll road companies:

[T]he ownership and operation of this road by a private corporation is contrary to public sentiment in this county, and [the] cause of good roads, which has received so much attention in this state in recent years, requires that this antiquated system should be abolished. … That public opinion throughout the state is strongly in favor of the abolition of toll roads is indicated by the fact that since the passage of the act of 1899, which permits counties to acquire these roads, the boards of supervisors of most of the counties where such roads have existed have availed themselves of its provisions and practically abolished the toll road.

Given such attitudes, it was no wonder that the new Office of Road Inquiry, within the U.S. Department of Agriculture, began in 1893 to gather information, conduct research, and “educate” for better roads. The new bureaucracy opposed toll roads, and the Federal Highway Act of 1916 barred the use of tolls on highways receiving federal money (Seely 1987, 15, 79). Anti-toll-road sentiment became state and national policy.

Conclusions and Implications

Throughout the nineteenth century, the United States was notoriously “land-rich” and “capital-poor.” The viability of turnpikes shows how Americans devised institutions – in this case, toll-collecting corporations – that allowed them to invest precious capital in important public projects. What’s more, turnpikes paid little in direct dividends and stock appreciation, yet still attracted investment. Investors, of course, cared about long-term economic development, but that alone does not account for how turnpike organizers overcame the free-rider problem inherent in buying turnpike stock. Esteem, social pressure, and other non-economic motivations influenced local residents to make investments that they knew would be unprofitable (at least in a direct sense) but would nevertheless help the entire community. On the other hand, the turnpike companies enjoyed the organizational clarity of stock ownership and residual returns. All companies faced the possibility of pressure from investors, who might have wanted to salvage something of their investment. Residual claimancy may have enhanced the viability of many projects, including communitarian projects undertaken primarily for use and esteem.

The combination of these two ingredients – the appeal of use and esteem, and the incentives and proprietary clarity of residual returns – is today severely undermined by the modern legal bifurcation of private initiative into “not-for-profit” and “for-profit” concerns. Not-for-profit corporations can appeal to use and esteem but cannot organize themselves to earn residual returns. For-profit corporations organize themselves for residual returns but cannot very well appeal to use and esteem. As already noted, prior to modern tax law and regulation, the old American toll roads were, relative to the British turnpike trusts, more, not less, use-and-esteem oriented by virtue of being structured to pay dividends rather than interest. Like the eighteenth-century British turnpike trusts, the twentieth-century American governmental toll projects financed (in part) by privately purchased bonds generally failed, relative to the nineteenth-century American company model, to draw on use and esteem motivations.

The turnpike experience of nineteenth-century America suggests that the stock/dividend company can also be a fruitful, efficient, and socially beneficial way to make losses and go on making losses. The success of turnpikes suggests that our modern sensibility of dividing enterprises between profit and non-profit – a distinction embedded in modern tax laws and regulations – unnecessarily impoverishes the imagination of economists and other policy makers. Without such strict legal and institutional bifurcation, our own modern society might better recognize the esteem in trade and the trade in esteem.

References

Baer, Christopher T., Daniel B. Klein, and John Majewski. “From Trunk to Branch: Toll Roads in New York, 1800-1860.” Essays in Economic and Business History XI (1993): 191-209.

Beito, David T., and Linda Royster Beito. “Rival Road Builders: Private Toll Roads in Nevada, 1852-1880.” Nevada Historical Society Quarterly 41 (1998): 71- 91.

Benson, Bruce. “Are Public Goods Really Common Pools? Consideration of the Evolution of Policing and Highways in England.” Economic Inquiry 32 no. 2 (1994).

Bogart, W. H. “First Plank Road.” Hunt’s Merchant Magazine (1851).

Brown, Richard D. “The Emergence of Voluntary Associations in Massachusetts, 1760-1830.” Journal of Voluntary Action Research (1973): 64-73.

Bodenhorn, Howard. A History of Banking in Antebellum America. New York: Cambridge University Press, 2000.

Cage, R. A. “The Lowden Empire: A Case Study of Wagon Roads in Northern California.” The Pacific Historian 28 (1984): 33-48.

Davis, Joseph S. Essays in the Earlier History of American Corporations. Cambridge: Harvard University Press, 1917.

DuBasky, Mayo. The Gist of Mencken: Quotations from America’s Critic. Metuchen, NJ: Scarecrow Press, 1990.

Durrenberger, J.A. Turnpikes: A Study of the Toll Road Movement in the Middle Atlantic States and Maryland. 1931. Reprint, Valdosta, GA: Southern Stationery and Printing, 1981.

Ekirch, Arthur A., Jr. The Decline of American Liberalism. New York: Atheneum, 1967.

Fishlow, Albert. “Internal Transportation in the Nineteenth and Early Twentieth Centuries.” In The Cambridge Economic History of the United States, Vol. II: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman. New York: Cambridge University Press, 2000.

Geddes, George. Scientific American 5 (April 27, 1850).

Geddes, George. Observations upon Plank Roads. Syracuse: L.W. Hall, 1850.

Goodrich, Carter. “Public Spirit and American Improvements.” Proceedings of the American Philosophical Society, 92 (1948): 305-09.

Goodrich, Carter. Government Promotion of American Canals and Railroads, 1800-1890. New York: Columbia University Press, 1960.

Gunderson, Gerald. “Privatization and the Nineteenth-Century Turnpike.” Cato Journal 9 no. 1 (1989): 191-200.

Higgs, Robert. Crises and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Higgs, Robert. “Regime Uncertainty: Why the Great Depression Lasted So Long and Why Prosperity Resumed after the War.” Independent Review 1 no. 4 (1997): 561-600.

Kaplan, Michael D. “The Toll Road Building Career of Otto Mears, 1881-1887.” Colorado Magazine 52 (1975): 153-70.

Kirkland, Edward C. Men, Cities and Transportation: A Study in New England History, 1820-1900. Cambridge, MA.: Harvard University Press, 1948.

Klein, Daniel. “The Voluntary Provision of Public Goods? The Turnpike Companies of Early America.” Economic Inquiry (1990): 788-812. (Reprinted in The Voluntary City, edited by David Beito, Peter Gordon and Alexander Tabarrok. Ann Arbor: University of Michigan Press, 2002.)

Klein, Daniel B. and Gordon J. Fielding. “Private Toll Roads: Learning from the Nineteenth Century.” Transportation Quarterly 46, no. 3 (1992): 321-41.

Klein, Daniel B. and John Majewski. “Economy, Community and Law: The Turnpike Movement in New York, 1797-1845.” Law & Society Review 26, no. 3 (1992): 469-512.

Klein, Daniel B. and John Majewski. “Plank Road Fever in Antebellum America: New York State Origins.” New York History (1994): 39-65.

Klein, Daniel B. and Chi Yin. “Use, Esteem, and Profit in Voluntary Provision: Toll Roads in California, 1850-1902.” Economic Inquiry (1996): 678-92.

Kresge, David T. and Paul O. Roberts. Techniques of Transport Planning, Volume Two: Systems Analysis and Simulation Models. Washington DC: Brookings Institution, 1971.

Lane, Wheaton J. From Indian Trail to Iron Horse: Travel and Transportation in New Jersey, 1620-1860. Princeton: Princeton University Press, 1939.

Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War. New York: Cambridge University Press, 2000.

Majewski, John. “The Booster Spirit and ‘Mid-Atlantic’ Distinctiveness: Shareholding in Pennsylvania Banking and Transportation Corporations, 1800 to 1840.” Manuscript, Department of History, UC Santa Barbara, 2001.

Majewski, John, Christopher Baer and Daniel B. Klein. “Responding to Relative Decline: The Plank Road Boom of Antebellum New York.” Journal of Economic History 53, no. 1 (1993): 106-122.

Nash, Christopher A. “Integration of Public Transport: An Economic Assessment.” In Bus Deregulation and Privatisation: An International Perspective, edited by J.S. Dodgson and N. Topham. Brookfield, VT: Avebury, 1988.

Nash, Gerald D. State Government and Economic Development: A History of Administrative Policies in California, 1849-1933. Berkeley: University of California Press (Institute of Governmental Studies), 1964.

Pawson, Eric. Transport and Economy: The Turnpike Roads of Eighteenth Century Britain. London: Academic Press, 1977.

Peyton, Billy Joe. “Survey and Building the [National] Road.” In The National Road, edited by Karl Raitz. Baltimore: Johns Hopkins University Press, 1996.

Poole, Robert W. “Private Toll Roads.” In Privatizing Transportation Systems, edited by Simon Hakim, Paul Seidenstat, and Gary W. Bowman. Westport, CT: Praeger, 1996.

Reiser, Catherine Elizabeth. Pittsburgh’s Commercial Development, 1800-1850. Harrisburg: Pennsylvania Historical and Museum Commission, 1951.

Ridgway, Arthur. “The Mission of Colorado Toll Roads.” Colorado Magazine 9 (1932): 161-169.

Roth, Gabriel. Roads in a Market Economy. Aldershot, England: Avebury Technical, 1996.

Searight, Thomas B. The Old Pike: A History of the National Road. Uniontown, PA: Thomas Searight, 1894.

Seely, Bruce E. Building the American Highway System: Engineers as Policy Makers. Philadelphia: Temple University Press, 1987.

Taylor, George R. The Transportation Revolution, 1815-1860. New York: Rinehart, 1951.

Thwaites, Reuben Gold. Early Western Travels, 1746-1846. Cleveland: A. H. Clark, 1907.

U. S. Agency for International Development. “A History of Foreign Assistance.” On the U.S. A.I.D. Website. Posted April 3, 2002. Accessed January 20, 2003.

Wood, Frederick J. The Turnpikes of New England and Evolution of the Same through England, Virginia, and Maryland. Boston: Marshall Jones, 1919.

1 Daniel Klein, Department of Economics, Santa Clara University, Santa Clara, CA, 95053, and Ratio Institute, Stockholm, Sweden; Email: Dklein@scu.edu.

John Majewski, Department of History, University of California, Santa Barbara, 93106; Email: Majewski@history.ucsb.edu.

2 The term “turnpike” comes from Britain, referring to a long staff (or pike) that acted as a swinging barrier or tollgate. In nineteenth century America, “turnpike” specifically means a toll road with a surface of gravel and earth, as opposed to “plank roads” which refer to toll roads surfaced by wooden planks. Later in the century, all such roads were typically just “toll roads.”

3 For a discussion of returns and expectations, see Klein 1990: 791-95.

4 See Klein 1990: 803-808, Klein and Majewski 1994: 56-61.

5 The 414 figure consists of 222 companies organized under the general law, 102 chartered by the legislature, and 90 companies identified through county records, local histories, and various other sources.

6 Durrenberger (1931: 164) notes that in 1911 there were 108 turnpikes operating in Pennsylvania alone.

Citation: Klein, Daniel and John Majewski. “Turnpikes and Toll Roads in Nineteenth-Century America”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/turnpikes-and-toll-roads-in-nineteenth-century-america/

History of the U.S. Telegraph Industry

Tomas Nonnenmacher, Allegheny College

Introduction

The electric telegraph was one of the first telecommunications technologies of the industrial age. Its immediate predecessors were homing pigeons, visual networks, the Pony Express, and railroads. By transmitting information quickly over long distances, the telegraph facilitated the growth of the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms. This entry focuses on the industrial organization of the telegraph industry from its inception through its demise, and on the industry’s impact on the American economy.

The Development of the Telegraph

The telegraph was similar to many other inventions of the nineteenth century. It replaced an existing technology, dramatically reduced costs, was monopolized by a single firm, and ultimately was displaced by a newer technology. Like most radical new technologies, the telecommunications revolution of the mid-1800s was not a revolution at all, but rather consisted of many inventions and innovations in both technology and industrial organization. This section is broken into four parts, each reviewing an era of telegraphy: precursors to the electric telegraph, early industrial organization of the industry, Western Union’s dominance, and the decline of the industry.

Precursors to the Electric Telegraph

Webster’s definition of a telegraph is “an apparatus for communicating at a distance by coded signals.” The earliest telegraph systems consisted of smoke signals, drums, and mirrors used to reflect sunlight. In order for these systems to work, both parties (the sender and the receiver) needed a method of interpreting the signals. Henry Wadsworth Longfellow’s poem recounting Paul Revere’s ride (“One if by land, two if by sea, and I on the opposite shore will be”) gives an example of a simple system. The first extensive telegraph network was the visual telegraph. In 1791 the Frenchman Claude Chappe used a visual network (which consisted of a telescope, a clock, a codebook, and black and white panels) to send a message ten miles. He called his invention the télégraphe, or far writer. Chappe refined and expanded his network, and by 1799 his telegraph consisted of a network of towers with mechanical arms spread across France. The position of the arms was interpreted using a codebook with over 8,000 entries.

Technological Advances

Due to technological difficulties, the electric telegraph could not at first compete with the visual telegraph. The basic principle of the electric telegraph is to send an electric current through a wire and to break the current in particular patterns that denote letters or phrases. Morse code, named after Samuel Morse, is still used today. For instance, the code for SOS (... --- ...) is a well-known call for help. Two elements had to be perfected before an electric telegraph could work: a means of sending the signal (generating and storing electricity) and a means of receiving it (recording the breaks in the current).
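The scheme is simple enough to sketch in a few lines. The fragment below is only an illustration, containing just the two code entries needed to reproduce the SOS example:

```python
# A fragment of International Morse code: each letter maps to a
# pattern of dots (short signals) and dashes (long signals).
# Only the entries needed for "SOS" are shown; the full alphabet
# works the same way.
MORSE = {"S": "...", "O": "---"}

def encode(text):
    """Encode a string as Morse, separating letters with single spaces."""
    return " ".join(MORSE[letter] for letter in text.upper())

print(encode("SOS"))  # ... --- ...
```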

The science behind the telegraph dates back at least as far as Roger Bacon’s (1220-1292) experiments in magnetism. Numerous small steps in the science of electricity and magnetism followed. Important inventions include those of Giambattista della Porta (1558), William Gilbert (1603), Stephen Gray (1729), William Watson (1747), Pieter van Musschenbroek (1754), Luigi Galvani (1786), Alessandro Volta (1800), André-Marie Ampère (1820), William Sturgeon (1825), and Joseph Henry (1829). A much longer list could be made, but the point is that no single person can be credited with developing the necessary technology of the telegraph.

1830-1866: Development and Consolidation of the Electric Telegraph Industry

In 1832, Samuel Morse returned to the United States from his artistic studies in Europe. While discussing electricity with fellow passengers, Morse conceived of the idea of a single-wire electric telegraph. No one until this time had Morse’s zeal for the applicability of electromagnetism to telecommunications or his conviction of its eventual profitability. Morse obtained a patent in the United States in 1838 but split his patent right to gain the support of influential partners. He obtained a $30,000 grant from Congress in 1843 to build an experimental line between Baltimore and Washington. The first public message over Morse’s line (“What hath God wrought?”) echoed the first message over Chappe’s system (“If you succeed, you will bask in glory”). Both indicated the inventors’ convictions about the importance of their systems.

Morse and His Partners

Morse realized early on that he was incapable of handling the business end of the telegraph and hired Amos Kendall, a former Postmaster General and a member of Andrew Jackson’s “Kitchen Cabinet,” to manage his business affairs. By 1848 Morse had consolidated the partnership to four members. Kendall managed the three-quarters of the patent belonging to Morse, Leonard Gale, and Alfred Vail. Gale and Vail had helped Morse develop the telegraph’s technology. F.O.J. Smith, a former Maine Representative whose help was instrumental in obtaining the government grant, decided to retain direct control of his portion of the patent right. The partnership agreement was vague, and led to discord between Kendall and Smith. Eventually the partners split the patent right geographically. Smith controlled New England, New York, and the upper-Midwest, and Morse controlled the rest of the country.

The availability of financing influenced the early industrial organization of the telegraph. Initially, Morse tried to sell his patent to the government, Kendall, Smith, and several groups of businessmen, but all attempts were unsuccessful. Kendall then attempted to generate interest in building a unified system across the country. This too failed, leaving Kendall to sell the patent right piecemeal to regional interests. These lines covered the most potentially profitable routes, emanating from New York and reaching Washington, Buffalo, Boston and New Orleans. Morse also licensed feeder lines to supply main lines with business.

Rival Patents

Royal House and Alexander Bain introduced rival patents in 1846 and 1849. Entrepreneurs constructed competing lines on the major eastern routes using the new patents. The House device needed a higher quality wire and more insulation as it was a more precise instrument. It had a keyboard at one end and printed out letters at the other. At its peak, it could send messages considerably faster than Morse’s technique. The Bain device was similar to Morse’s, except that instead of creating dots and dashes, it discolored a piece of chemically treated paper by sending an electric current through it. Neither competitor had success initially, leading Kendall to underestimate their eventual impact on the market.

By 1851, ten separate firms ran lines into New York City. There were three competing lines between New York and Philadelphia, three between New York and Boston, and four between New York and Buffalo. In addition, two lines operated between Philadelphia and Pittsburgh, two between Buffalo and Chicago, and three between points in the Midwest and New Orleans; entrepreneurs also erected lines between many Midwestern cities. In all, in 1851 the Bureau of the Census reported 75 companies with 21,147 miles of wire.

Multilateral Oligopolies

The telegraph markets in 1850 were multilateral oligopolies. The term “multilateral” means that the production process extended in several directions. Oligopolies are markets in which a small number of firms strategically interact. Telegraph firms competed against rivals on the same route, but sought alliances with firms with which they connected. For example, four firms (New York, Albany & Buffalo; New York State Printing; Merchants’ State; and New York and Erie) competed on the route between New York City and Buffalo. Rates fell dramatically (by more than 50%) as new firms entered, so this market was quite competitive for a while. But each of these firms sought to create an alliance with connecting firms, such as those with lines from New York City to Boston or Washington. Increased business from exchanging messages meant increased profitability.

Mistransmission Problems

Quality competition was also fierce, with the line that erected the best infrastructure and supplied the fastest service usually dominating other, less capable firms. Messages could easily be garbled, and given the predominantly business-related use of the telegraph, a garbled message was often worse than no message at all. A message sent from Boston to St. Louis could have traveled over the lines of five firms. Due to the complexity of the production process, messages were also often lost, with no firm taking responsibility for the mistransmission. This lack of responsibility gave firms an incentive to provide a lower quality of service than an integrated network would have. These issues ultimately contributed to the consolidation of the industry.

Horizontal and System Integration

Horizontal integration (integration between two competing firms) and system integration (integration between two connecting firms) occurred in the telegraph industry during different periods. System integration occurred between 1846 and 1852, as main lines acquired most of the feeder lines in the country. In 1852 the Supreme Court declared the Bain telegraph an infringement on Morse’s patent, and Bain lines merged with Morse lines across the country. Between 1853 and 1857 regional monopolies formed and signed the “Treaty of Six Nations,” a pooling agreement between the six largest regional firms. During this phase the industry experienced both horizontal and system integration. By the end of the period, most remaining firms were regional monopolists, each controlling several large cities and owning both the House and the Morse patents. Figure 1 shows the locations of these firms.

Figure 1: Treaty of Six Nations

Source: Thompson, p. 315

The final phase of integration occurred between 1857 and 1866. In this period the pool members consolidated into a national monopoly. By 1864 only Western Union and the American Telegraph Company remained of the “Six Nations.” The United States Telegraph Company entered the field by consolidating smaller, independent firms in the early 1860s, and operated in the territory of both the American Telegraph Company and Western Union. By 1866 Western Union absorbed its last two competitors and reached its position of market dominance.

Efficiency versus Market Power

Horizontal and system integration had two causes: efficiency and market power. Horizontal integration created economies of scale that could be realized from placing all of the wires between two cities on the same route or all the offices in a city in the same location. This consolidation reduced the cost of maintaining multiple lines. The reduction in competition due to horizontal integration also allowed firms to charge a higher price and earn monopoly profits. The efficiency gain from system integration was better control of messages travelling long distances. With responsibility for the message placed clearly in the hands of one firm, messages were transmitted with more care. System integration also created monopoly power, since to compete with a large incumbent system, a new entrant would have to also create a large infrastructure.

1866-1900: Western Union’s Dominance

The period from 1866 through the turn of the century was the apex of Western Union’s power. Yearly messages sent over its lines increased from 5.8 million in 1867 to 63.2 million in 1900. Over the same period, transmission rates fell from an average of $1.09 to 30 cents per message. Even with these lower prices, roughly 30 to 40 cents of every dollar of revenue were net profit for the company. Western Union faced three threats during this period: increased government regulation, new entrants into the field of telegraphy, and new competition from the telephone. The last two were the most important to the company’s future profitability.
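The figures above imply steady traffic growth alongside falling rates. A back-of-the-envelope check, using only the numbers quoted in the text (illustrative Python, not part of the original source):

```python
# Back-of-the-envelope check of Western Union's growth, 1867-1900.
# All figures are from the text; the calculation itself is illustrative.
messages_1867, messages_1900 = 5.8e6, 63.2e6   # yearly messages
rate_1867, rate_1900 = 1.09, 0.30              # average rate per message, dollars
years = 1900 - 1867

# Compound annual growth rate of message traffic
growth = (messages_1900 / messages_1867) ** (1 / years) - 1
# Proportional decline in the average rate
rate_drop = 1 - rate_1900 / rate_1867

print(f"Implied traffic growth: {growth:.1%} per year")   # ~7.5% per year
print(f"Decline in average rate: {rate_drop:.0%}")        # ~72%
```

At roughly 7.5 percent annual traffic growth and a 72 percent rate decline, the 30-to-40-cent profit margin noted above underscores how large Western Union's cost advantages must have been.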

Western Union Fends off Regulation

Western Union was the first nationwide industrial monopoly, with over 90% of the market share and dominance in every state. The states and the federal government responded to this market power. State regulation was largely futile given the interstate character of the industry. On the federal level, bills were introduced in almost every session of Congress calling for either regulation of the industry or government entry into it. Western Union’s lobby was able to block nearly all such legislation. The few regulations that did pass either helped Western Union maintain its control over the market or were never enforced.

Western Union’s Smaller Rivals

Western Union’s first rival was the Atlantic and Pacific Telegraph Company, a conglomeration of new and merged lines created by Jay Gould in 1874. Gould sought to wrest control of Western Union from the Vanderbilts, and he succeeded in 1881 when the two firms merged. A more permanent rival appeared in the 1880s in the form of the Postal Telegraph Company, headed by John Mackay, who had already made a fortune at the Comstock Lode. Mackay did what many of his telegraph predecessors had done in the 1850s: he bought out existing bankrupt firms and merged them into a network with economies of scale large enough to compete with Western Union. Postal never challenged Western Union’s market dominance, but at various times it controlled 10 to 20 percent of the market.

The Threat from the Telephone

Western Union’s greatest threat came from a new technology, the telephone. Alexander Graham Bell patented the telephone in 1876, initially referring to it as a “talking telegraph.” Bell offered Western Union the patent for the telephone for $100,000, but the company declined to purchase it. Western Union could have easily gained control of AT&T in the 1890s, but management decided that higher dividends were more important than expansion. The telephone was used in the 1880s only for local calling, but with the development in the 1890s of “long lines,” the telephone offered increased competition to the telegraph. In 1900, local calls accounted for 97% of the telephone’s business, and it was not until the twentieth century that the telephone fully displaced the telegraph.

1900-1988: Increased Competition and Decline

The twentieth century saw the continued rise of the telephone and the decline of the telegraph. Telegraphy retained a niche in inexpensive long-distance and international communication, including teletypewriter service, Telex, and the stock ticker. As shown in Table 1, the growth of telegraph traffic slowed after 1900, and after 1930 the number of messages sent began to decline, apart from a wartime spike in 1945.

Table 1: Messages Handled by the Telegraph Network: 1870-1970

Date Messages Handled Date Messages Handled
1870 9,158,000 1930 211,971,000
1880 29,216,000 1940 191,645,000
1890 55,879,000 1945 236,169,000
1900 63,168,000 1950 178,904,000
1910 75,135,000 1960 124,319,000
1920 155,884,000 1970 69,679,000

Source: Historical Statistics.
Notes: Western Union messages 1870-1910; all telegraph companies, 1920-1970.

AT&T Obtains Western Union, Then Gives It Up

In 1909, AT&T gained control of Western Union by purchasing 30% of its stock. In many ways, the companies were heading in opposite directions. AT&T was expanding rapidly, while Western Union was content to reap handsome profits and issue large dividends but not reinvest in itself. Under AT&T’s ownership, Western Union was revitalized, but the two companies separated in 1913, succumbing to pressure from the Department of Justice. In 1911, the Department of Justice successfully used the Sherman Antitrust Act to force a breakup of Standard Oil. This success made the threat of antitrust action against AT&T very credible. Both Postal Telegraph and the independent telephone companies wishing to interconnect with AT&T lobbied for government regulation. In order to forestall any such government action, AT&T issued the “Kingsbury Commitment,” a unilateral commitment to divest itself of Western Union and allow independent telephone firms to interconnect.

Decline of the Telegraph

The telegraph flourished in the 1920s, but the Great Depression hit the industry hard, and it never recovered its previous position. AT&T introduced the teletypewriter exchange service in 1931. The teletypewriter and the Telex allowed customers to install a machine on their premises that would send and receive messages directly. In 1938, AT&T carried 18%, Postal 15%, and Western Union 64% of telegraph traffic. In 1945, 236 million domestic messages were sent, generating $182 million in revenues; this was the peak year for telegraph traffic in the United States. By that time, Western Union had incorporated over 540 telegraph and cable companies into its system. The last important merger, between Western Union and Postal, occurred in 1945. Even this final merger could not stop the continuing rise of the telephone or the telegraph’s decline. Already in 1945, AT&T’s revenues and traffic dwarfed those of Western Union: AT&T earned $1.9 billion in yearly revenues by transmitting 89.4 million local phone calls and 4.9 million toll calls daily. Table 2 shows the increasing competitiveness of telephone rates with telegraph rates.

Table 2: Telegraph and Telephone Rates from New York City to Chicago: 1850-1970

Date Telegraph* Telephone**
1850 $1.55 --
1870 1.00 --
1890 .40 --
1902 -- 5.45
1919 .60 4.65
1950 .75 1.50
1960 1.45 1.45
1970 2.25 1.05

Source: Historical Statistics.
Notes: * Beginning 1960, for a 15-word message; prior to 1960, for a 10-word message. ** Rates for a station-to-station, daytime, 3-minute call. -- indicates rate not reported.
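The crossover in Table 2 can be made explicit. The following illustrative Python sketch (not part of the original source) computes the telephone-to-telegraph rate ratio for the years in which both rates are reported; the comparison is rough, since the telegraph rate is per 10- or 15-word message while the telephone rate is per 3-minute call:

```python
# Telephone-to-telegraph rate ratios, New York City to Chicago, from Table 2.
# Rough comparison only: telegraph rates are per message, telephone per call.
rates = {  # year: (telegraph_rate, telephone_rate), in dollars
    1919: (0.60, 4.65),
    1950: (0.75, 1.50),
    1960: (1.45, 1.45),
    1970: (2.25, 1.05),
}
for year, (telegraph, telephone) in sorted(rates.items()):
    print(f"{year}: telephone/telegraph = {telephone / telegraph:.2f}")
# The ratio falls from ~7.75 in 1919 to parity in 1960 and under 0.5 by 1970.
```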

The Effects of the Telegraph

The travel time from New York City to Cleveland in 1800 was two weeks, with another four weeks necessary to reach Chicago. By 1830, those travel times had fallen by half, and by 1860 it took only two days to reach Chicago from New York City. By telegraph, however, news could travel between those two cities almost instantaneously. This section examines three areas where the telegraph affected economic growth: railroads, high-throughput firms, and financial markets.

Telegraphs and Railroads

The telegraph and the railroad were natural partners in commerce. The telegraph needed the right of way that the railroads provided and the railroads needed the telegraph to coordinate the arrival and departure of trains. These synergies were not immediately recognized. Only in 1851 did railways start to use telegraphy. Prior to that, telegraph wires strung along the tracks were seen as a nuisance, occasionally sagging and causing accidents and even fatalities.

The greatest savings of the telegraph were from the continued use of single-tracked railroad lines. Prior to 1851, the U.S. system was single-tracked, and trains ran on a time-interval system. Two types of accidents could occur. Trains running in opposite directions could run into one another, as could trains running in the same direction. The potential for accidents required that railroad managers be very careful in dispatching trains. One way to reduce the number of accidents would have been to double-track the system. A second, better, way was to use the telegraph.

Double-tracking was a good alternative, but not a perfect one. Double-tracked lines would eliminate head-on collisions, but not same-direction ones. Dispatching trains running in the same direction would still rely on a timing system, i.e., a required interval between departing trains, and accidents remained possible under this system. By using the telegraph, station managers knew exactly which trains were on the tracks under their supervision. Double-tracking the U.S. rail system in 1893 has been estimated to cost $957 million; Western Union’s book capitalization that year was $123 million, making the telegraph seem a bargain by comparison. Of course, the railroads could have used a system like Chappe’s visual telegraph to coordinate traffic, but such a system would have been less reliable and could not have handled the same volume of traffic.
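The cost comparison above can be stated directly. A minimal illustrative calculation (not part of the original source), using the two 1893 figures quoted in the text:

```python
# 1893 cost comparison from the text; illustrative arithmetic only.
double_track_cost = 957e6   # estimated cost of double-tracking the U.S. rail system
wu_capitalization = 123e6   # Western Union's book capitalization in 1893

ratio = double_track_cost / wu_capitalization
print(f"Double-tracking would have cost about {ratio:.1f} times "
      f"Western Union's entire capitalization")  # ~7.8x
```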

Telegraph and Perishable Products Industries

Other industries with high inventory turnover also benefited from the telegraph, particularly those whose products were perishable, such as meatpacking and the distribution of fruits and vegetables. The growth of both industries was facilitated by the introduction of the refrigerated car in 1874, while the telegraph provided the exact control of shipments that they required. For instance, refrigeration and the telegraph together allowed for the slaughter and disassembly of livestock in the giant stockyards of Chicago, Kansas City, St. Louis, and Omaha. Dressed beef could then be shipped east at half the cost of shipping live cattle. The centralization of the stockyards also created tremendous amounts of by-products that could be processed into glue, tallow, dye, fertilizer, feed, brushes, false teeth, gelatin, oleomargarine, and many other useful products.

Telegraph and Financial Markets

The telegraph undoubtedly had a major impact on the structure of financial markets in the United States. New York became the financial center of the country, setting prices for a variety of commodities and financial instruments. Among these were beef, corn, wheat, stocks and bonds. As the telegraph spread, so too did the centralization of prices. For instance, in 1846, wheat and corn prices in Buffalo lagged four days behind those in New York City. In 1848, the two markets were linked telegraphically and prices were set simultaneously.

The centralization of stock prices helped make New York the financial capital of the United States. Over the course of the nineteenth century, hundreds of exchanges appeared and then disappeared across the country. Few of them remained, with only those in New York, Philadelphia, Boston, Chicago and San Francisco achieving any permanence. By 1910, 90 percent of all bond and two-thirds of all stock trades occurred on the New York Stock Exchange.

Centralization of the market created much more liquidity for stockholders. As the number of potential traders increased, so too did the ability to find a buyer or seller of a financial instrument. This increase in liquidity may have led to an increase in the total amount invested in the market, therefore leading to higher levels of investment and economic growth. Centralization may also have led to the development of certain financial institutions that could not have been developed otherwise. Although difficult to quantify, these aspects of centralization certainly had a positive effect on economic growth.

In some respects, we may tend to overestimate the telegraph’s influence on the economy. The rapid distribution of information may have had a collective action problem associated with it. If no one else in Buffalo has a piece of information, such as a change in the price of wheat in New York City, then there is a large private incentive to discover that information quickly. But once everyone has the information, no one is made better off. A great deal of effort may therefore have been spent on an endeavor that, from society’s perspective, did not increase overall efficiency. The centralization of trading in New York also increased the gains from wealth-neutral or wealth-reducing activities, such as speculation and market manipulation. Higher volumes of trading increased the payoff from successfully manipulating a market, yet did not increase society’s wealth.

Conclusion

The telegraph accelerated the speed of business transactions during the late nineteenth century and contributed to the industrialization of the United States. Like most industries, it faced new competition that ultimately proved its downfall. The telephone was easier and faster to use, and the telegraph ultimately lost its cost-advantages. In 1988, Western Union divested itself of its telegraph infrastructure and focused on financial services, such as money orders. A Western Union telegram is still available, currently costing $9.95 for 250 words.

Telegraph Timeline

1837 Cooke and Wheatstone patent telegraph in England.
1838 Morse’s Electro-Magnetic Telegraph patent approved.
1844 First message sent between Washington and Baltimore.
1846 First commercial telegraph line completed. The Magnetic Telegraph Company’s lines ran from New York to Washington.
House’s Printing Telegraph patent approved.
1848 Associated Press formed to pool telegraph traffic.
1849 Bain’s Electro-Chemical patent approved.
1851 Hiram Sibley and associates incorporate New York and Mississippi Valley Printing Telegraph Company. Later became Western Union.
1851 Telegraph first used to coordinate train departures.
1857 Treaty of Six Nations is signed, creating a national cartel.
1858 First transatlantic cable is laid from Newfoundland to Valentia, Ireland. Fails after 23 days, having been used to send a total of 4,359 words. Total cost of laying the line was $1.2 million.
1861 First Transcontinental telegraph completed.
1866 First successful transatlantic telegraph cable laid.
Western Union merges with major remaining rivals.
1867 Stock ticker service inaugurated.
1870 Western Union introduces the money order service.
1876 Alexander Graham Bell patents the telephone.
1909 AT&T gains control of Western Union. Divests itself of Western Union in 1913.
1924 AT&T offers Teletype system.
1926 Inauguration of the direct stock ticker circuit from New York to San Francisco.
1930 High-speed tickers can print 500 words per minute.
1945 Western Union and Postal Telegraph Company merge.
1962 Western Union offers Telex for international teleprinting.
1974 Western Union places Westar satellite in operation.
1988 Western Union Telegraph Company reorganized as Western Union Corporation. The telecommunications assets were divested and Western Union focuses on money transfers and loan services.

References

Blondheim, Menahem. News over the Wires. Cambridge: Harvard University Press, 1994.

Brock, Gerald. The Telecommunications Industry. Cambridge: Harvard University Press, 1981.

DuBoff, Richard. “Business Demand and the Development of the Telegraph in the United States, 1844-1860.” Business History Review 54 (1980): 461-477.

Field, Alexander. “The Telegraphic Transmission of Financial Asset Prices and Orders to Trade: Implications for Economic Growth, Trading Volume, and Securities Market Regulation.” Research in Economic History 18 (1998).

Field, Alexander. “French Optical Telegraphy, 1793-1855: Hardware, Software, Administration.” Technology and Culture 35 (1994): 315-47.

Field, Alexander. “The Magnetic Telegraph, Price and Quantity Data, and the New Management of Capital.” Journal of Economic History 52 (1992): 401-13.

Gabler, Edwin. The American Telegrapher: A Social History 1860-1900. New Brunswick: Rutgers University Press, 1988.

Goldin, H. H. “Governmental Policy and the Domestic Telegraph Industry.” Journal of Economic History 7 (1947): 53-68.

Israel, Paul. From Machine Shop to Industrial Laboratory. Baltimore: Johns Hopkins, 1992.

Lefferts, Marshall. “The Electric Telegraph: Its Influence and Geographical Distribution.” American Geographical and Statistical Society Bulletin II (1857).

Nonnenmacher, Tomas. “State Promotion and Regulation of the Telegraph Industry, 1845-1860.” Journal of Economic History 61 (2001).

Oslin, George. The Story of Telecommunications. Macon: Mercer University Press, 1992.

Reid, James. The Telegraph in America. New York: Polhemus, 1886.

Thompson, Robert. Wiring a Continent. Princeton: Princeton University Press, 1947.

U.S. Bureau of the Census. Report of the Superintendent of the Census for December 1, 1852, Washington: Robert Armstrong, 1853.

U.S. Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970: Bicentennial Edition, Washington: GPO, 1976.

Yates, JoAnne. “The Telegraph’s Effect on Nineteenth Century Markets and Firms.” Business and Economic History 15 (1986):149-63.

Citation: Nonnenmacher, Tomas. “History of the U.S. Telegraph Industry”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-the-u-s-telegraph-industry/

The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles offshore of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains rise directly out of the ocean along the eastern shore facing the Pacific, so that this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the western coastal plain that faces China. This plain is crossed by east-west rivers, which occasionally bring floods of water down from the mountains, creating broad, boulder-strewn flood plains. Until modern times, these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese, and a smaller number of Japanese. The aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established place of rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until it was ceded to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland, and the island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was permanently rescinded in 1788. Nevertheless, the island’s Chinese population grew about two percent per year in the century following the Qing takeover; both illegal immigration and natural increase were important components of this growth. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, constantly pushed the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated, or else moved toward the mountains, where they continued hunting, learned to raise cattle, or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.

Chinese pioneers made extensive use of written contracts but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements was usually a function of the lineage. Being part of a strong lineage was crucial to economic success and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on one’s place of residence, occupation, place of origin, surname, etc. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Due to property rights problems, land sales contracts often included the signature of not only the owner, but also his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait an indefinite period of time before repaying the loan, this led to an awkward situation in which the person who controlled the land did not have permanent ownership and had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island and throughout the nineteenth century, the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north of the island and its population was shorter in stature and had a lower life expectancy. The north of the island was better suited to rice production and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, they began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea produced primarily for the U.S. market.

During the last years of the Qing dynasty’s rule in Taiwan, Taiwan was made a full province of China and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than the Japanese economy. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of its urban population), was shipped back to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s was a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy was left in shambles by the events of the 1940s. It had lost its protected Japanese markets and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest and (3) severely restricted the size of individual landholdings forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s annual expected harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and effect on agricultural investment and productivity are still hotly debated.
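The reform's terms imply a quick payback for purchasing tenants. A rough illustrative calculation from the figures in the text (not part of the original source; it ignores interest, taxes, and harvest variation):

```python
# Implied terms of the 1950s land reform, using the figures from the text.
# Rough illustration only: ignores interest, taxes, and harvest variation.
price_multiple = 2.5   # land price: 2.5 times the annual expected harvest
rent_ceiling = 0.375   # rent cap: 37.5% of the expected harvest

# Years of capped rent needed to equal the land's purchase price
payback_years = price_multiple / rent_ceiling
print(f"Purchase price equals about {payback_years:.1f} years of capped rent")  # ~6.7
```

On these terms a tenant could buy out the land for roughly the equivalent of seven years of capped rent, which helps explain why the reform's distributive effects remain contested.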

High-speed growth accompanied by rapid industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports produced by small enterprises bound together by flexible subcontracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets, and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent, cutting the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in prewar Japan, while Taiwan’s only important prewar manufactured export, imitation panama hats, faced a 25% tariff in the U.S., their primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses producing goods exclusively for export.

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades is poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and students in higher education increased many-fold. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: MacMillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Koo, Hui-Wen, and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/

A History of the Standard of Living in the United States

Richard H. Steckel, Ohio State University

Methods of Measuring the Standard of Living

During many years of teaching, I have introduced the topic of the standard of living by asking students to pretend that they would be born again to unknown (random) parents in a country they could choose based on three of its characteristics. The list put forward in the classroom invariably includes many of the categories usually suggested by scholars who have studied the standard of living over the centuries: access to material goods and services; health; socio-economic fluidity; education; inequality; the extent of political and religious freedom; and climate. Thus, there is little disagreement among people, whether newcomers or professionals, on the relevant categories of social performance.

Components and Weights

Significant differences of opinion emerge, both among students and research specialists, on the precise measures to be used within each category and on the weights or relative importance that should be attached to each. There are numerous ways to measure health, for example, with some approaches emphasizing length of life while others give high priority to morbidity (illness or disability) or to other aspects of the quality of life while living (e.g., physical fitness). Conceivably one might attempt comparisons using all feasible measures, but this is expensive and time-consuming, and in any event many good measures within categories are highly correlated.

Weighting the various components is the most contentious issue in any attempt to summarize the standard of living, or otherwise compress diverse measures into a single number. Some people give high priority to income, for example, while others claim that health is most important. Economists and other social scientists recognize that tastes or preferences are individualistic and diverse, and following this logic to the extreme, one might argue that all interpersonal comparisons are invalid. On the other hand, there are general tendencies in preferences. Every class that I have taught has emphasized the importance of income and health, and for this reason I discuss historical evidence on these measures.

Material Aspects of the Standard of Living

Gross Domestic Product

The most widely used measure of the material standard of living is Gross Domestic Product (GDP) per capita, adjusted for changes in the price level (inflation or deflation). This measure, real GDP per capita, reflects only economic activities that flow through markets, omitting productive endeavors unrecorded in market exchanges, such as preparing meals at home or maintenance done by the homeowner. It ignores the work effort required to produce income and does not consider conditions surrounding the work environment, which might affect health and safety. Crime, pollution, and congestion, which many people consider important to their quality of life, are also excluded from GDP. Moreover, technological change, relative prices, and tastes affect the course of GDP and the products and services that it includes, which creates what economists call an “index number” problem that is not readily solvable. Nevertheless, most economists believe that real GDP per capita does summarize or otherwise quantify important aspects of the average availability of goods and services.

Time Trends in Real GDP per Capita

Table 1 shows the course of the material standard of living in the United States from 1820 to 1998. Over this period of 178 years real GDP per capita increased 21.7-fold, or an average of 1.73 percent per year. Although the evidence available to estimate GDP directly is meager, this rate of increase was probably many times higher than that experienced during the colonial period. This conclusion is justified by considering the implications of extrapolating the level observed in 1820 ($1,257) backward in time at the growth rate measured since 1820 (1.73 percent). Under this supposition, real per capita GDP would have doubled every forty years (halved every forty years going backward in time), and so by the mid-1700s there would have been insufficient income to support life. Because the cheapest diet able to sustain good health would have cost nearly $500 per year, the tentative assumption of modern economic growth contradicts what actually happened. Moreover, historical evidence suggests that important ingredients of modern economic growth, such as technological change and human and physical capital, accumulated relatively slowly during the colonial period.
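A short calculation makes the backward-extrapolation argument concrete. The sketch below uses only figures quoted above: the 1820 level of $1,257 (1990 international dollars), the post-1820 growth rate of 1.73 percent, and a subsistence diet costing roughly $500 per year; the helper function is purely illustrative.

```python
# Project real GDP per capita backward from 1820 at the post-1820 growth
# rate, to show the projection soon falls below subsistence.
BASE_YEAR, BASE_GDP = 1820, 1257.0   # 1990 international dollars
RATE = 0.0173                        # average annual growth, 1820-1998
SUBSISTENCE = 500.0                  # approximate cost of a minimal diet

def projected_gdp(year):
    """GDP per capita in `year`, assuming constant growth through 1820."""
    return BASE_GDP * (1 + RATE) ** (year - BASE_YEAR)

# Growth at 1.73 percent doubles income about every forty years...
print(round(projected_gdp(1860) / projected_gdp(1820), 2))

# ...so going backward it halves about every forty years, dipping below
# the subsistence threshold well before the mid-1700s.
for year in (1820, 1780, 1740):
    print(year, round(projected_gdp(year)), projected_gdp(year) < SUBSISTENCE)
```

Because the projected mid-eighteenth-century levels fall well below the cost of a subsistence diet, the post-1820 growth rate cannot extend back through the colonial period, which is the text’s point.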

Table 1: GDP per Capita in the United States

Year GDP per capita (a) Annual growth rate from previous period
1820 1,257
1870 2,445 1.34
1913 5,301 1.82
1950 9,561 1.61
1973 16,689 2.45
1990 23,214 1.94
1998 27,331 2.04

(a) Measured in 1990 international dollars.

Source: Maddison (2001), Tables A-1c and A-1d.
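The growth-rate column in Table 1 is the compound annual rate between benchmark years, i.e. (GDP_end / GDP_start)^(1/years) - 1. The sketch below recomputes it from the table’s GDP figures; the results agree with the published column to within a few hundredths of a percentage point.

```python
# Recompute Table 1's growth-rate column as compound annual rates
# between benchmark years (real GDP per capita, 1990 international dollars).
gdp = {1820: 1257, 1870: 2445, 1913: 5301, 1950: 9561,
       1973: 16689, 1990: 23214, 1998: 27331}

years = sorted(gdp)
for y0, y1 in zip(years, years[1:]):
    rate = (gdp[y1] / gdp[y0]) ** (1 / (y1 - y0)) - 1
    print(f"{y0}-{y1}: {100 * rate:.2f}% per year")
```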

Cycles in Real GDP per Capita

Although real GDP per capita is given for only seven dates in Table 1, it is apparent that economic progress has been uneven over time. If annual or quarterly data were given, they would show that business cycles have been a major feature of the economic landscape since industrialization began in the 1820s. By far the worst downturn in U.S. history occurred during the Great Depression of the 1930s, when real per capita GDP declined by approximately one-third and the unemployment rate reached 25 percent.

Regional Differences

The aggregate numbers also disguise regional differences in the standard of living. In 1840 personal income per capita was twice as high in the Northeast as in the North Central States. Regional divergence increased after the Civil War when the South Atlantic became the nation’s poorest region, attaining a level only one-third of that in the Northeast. Regional convergence occurred in the twentieth century and industrialization in the South significantly improved the region’s economic standing after World War II.

Health and the Standard of Living

Life Expectancy

Two measures of health are widely used in economic history: life expectancy at birth (or average length of life) and average height, which measures nutritional conditions during the growing years. Table 2 shows that life expectancy approximately doubled over the past century and a half, reaching 76.7 years in 1998. If depressions and recessions have adversely affected the material standard of living, epidemics have been a major cause of sudden declines in health in the past. Fluctuations during the nineteenth century are evident from the table, but as a rule growth rates in health have been considerably less volatile than those for GDP, particularly during the twentieth century.

Table 2: Life Expectancy at Birth in the United States

Year Life Expectancy
1850 38.3
1860 41.8
1870 44.0
1880 39.4
1890 45.2
1900 47.8
1910 53.1
1920 54.1
1930 59.7
1940 62.9
1950 68.2
1960 69.7
1970 70.8
1980 73.7
1990 75.4
1998 76.7

Source: Haines (2002)

Childhood mortality greatly affects life expectancy, which was low in the mid-1800s largely because mortality rates were very high for this age group. For example, roughly one child in five born alive in 1850 did not survive to age one, but today the infant mortality rate is under one percent. The past century and a half witnessed a significant shift in deaths from early childhood to old age. At the same time, the major causes of death have shifted from infectious diseases originating with germs or microorganisms to degenerative processes that are affected by life-style choices such as diet, smoking, and exercise.

The largest gains were concentrated in the first half of the twentieth century, when life expectancy increased from 47.8 years in 1900 to 68.2 years in 1950. Factors behind the growing longevity include the ascent of the germ theory of disease, programs of public health and personal hygiene, better medical technology, higher incomes, better diets, more education, and the emergence of health insurance.

Explanations of Increases in Life Expectancy

Numerous important medical developments contributed to improving health. The research of Pasteur and Koch was particularly influential in leading to acceptance of the germ theory in the late 1800s. Prior to their work, many diseases were thought to have arisen from miasmas or vapors created by rotting vegetation. Thus, swamps were accurately viewed as unhealthy, but not because they were home to mosquitoes and malaria. The germ theory gave public health measures a sound scientific basis, and shortly thereafter cities began cost-effective measures to remove garbage, purify water supplies, and process sewage. The notion that “cleanliness is next to Godliness” also emerged in the home, where bathing and the washing of clothes, dishes, and floors became routine.

The discovery of Salvarsan in 1910 provided the first antimicrobial drug (used against syphilis) that was effective in altering the course of a disease. This was an important medical event, but broad-spectrum antibiotics were not available until the middle of the century. The most famous of these early drugs was penicillin, which was not manufactured in large quantities until the 1940s. Much of the gain in life expectancy was attained before chemotherapy and a host of other medical technologies were widely available. A cornerstone of improving health from the late 1800s to the middle of the twentieth century was therefore prevention of disease by reducing exposure to pathogens. Also important were improvements in immune systems created by better diets and by vaccination against diseases such as smallpox and diphtheria.

Heights

In the past quarter century, historians have increasingly used average heights to assess health aspects of the standard of living. Average height is a good proxy for the nutritional status of a population because height at a particular age reflects an individual’s history of net nutrition, or diet minus the claims on the diet made by work (or physical activity) and disease. The growth of poorly nourished children may cease, and repeated bouts of biological stress, whether from food deprivation, hard work, or disease, often lead to stunting or a reduction in adult height. The average heights of children and of adults in countries around the world are highly correlated with their life expectancy at birth and with the log of per capita GDP in the country where they live.

This interpretation of average height has led to its use in studying the health of slaves, health inequality, living standards during industrialization, and trends in mortality. The first important results in the “new anthropometric history” dealt with the nutrition and health of American slaves as determined from stature recorded for identification purposes on the slave manifests required in the coastwise slave trade. The subject of slave health has been a contentious issue among historians, in part because vital statistics and nutrition information were never systematically collected for slaves (or for the vast majority of the American population in the mid-nineteenth century, for that matter). Yet the height data showed that slave children were astonishingly small and malnourished, while working slaves were remarkably well fed. Adolescent slaves grew rapidly as teenagers and were reasonably well off in nutritional aspects of health.

Time Trends in Average Height

Table 3 shows the time pattern in height of native-born American men obtained in historical periods from military muster rolls, and for men and women in recent decades from the National Health and Nutrition Examination Surveys. This historical trend is notable for the tall stature during the colonial period, the mid-nineteenth century decline, and the surge in heights of the past century. Comparisons of average heights from military organizations in Europe show that Americans were taller by two to three inches. Behind this achievement were a relatively good diet, little exposure to epidemic disease, and relative equality in the distribution of wealth. Americans could choose their foods from the best of European and Western Hemisphere plants and animals, and this dietary diversity combined with favorable weather meant that Americans never had to contend with harvest failures. Thus, even the poor were reasonably well fed in colonial America.

Table 3:

Average Height of Native-Born American Men and Women by Year of Birth

Year Men (cm) Women (cm) Men (in) Women (in)
1710 171.5 67.5
1720 171.8 67.6
1730 172.1 67.8
1740 172.1 67.8
1750 172.2 67.8
1760 172.3 67.8
1770 172.8 68.0
1780 173.2 68.2
1790 172.9 68.1
1800 172.9 68.1
1810 173.0 68.1
1820 172.9 68.1
1830 173.5 68.3
1840 172.2 67.8
1850 171.1 67.4
1860 170.6 67.2
1870 171.2 67.4
1880 169.5 66.7
1890 169.1 66.6
1900 170.0 66.9
1910 172.1 67.8
1920 173.1 68.1
1930 175.8 162.6 69.2 64.0
1940 176.7 163.1 69.6 64.2
1950 177.3 163.1 69.8 64.2
1960 177.9 164.2 70.0 64.6
1970 177.4 163.6 69.8 64.4

Source: Steckel (2002) and sources therein.
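Table 3 reports each height in both centimeters and inches; the two sets of columns are related by the standard conversion of 2.54 centimeters per inch. A trivial spot-check, using a few of the men’s figures from the table:

```python
# Spot-check Table 3's unit conversion (1 inch = 2.54 cm), men's heights.
CM_PER_INCH = 2.54
samples = {1710: (171.5, 67.5), 1830: (173.5, 68.3), 1960: (177.9, 70.0)}

for year, (cm, published_inches) in samples.items():
    converted = round(cm / CM_PER_INCH, 1)
    print(year, converted, published_inches)
```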

Explaining Height Cycles

Loss of stature began in the second quarter of the nineteenth century when the transportation revolution of canals, steamboats, and railways brought people into greater contact with diseases. The rise of public schools meant that children were newly exposed to major diseases such as whooping cough, diphtheria, and scarlet fever. Food prices also rose during the 1830s, and growing inequality in the distribution of income or wealth accompanied industrialization. Business depressions, which were most hazardous for the health of those who were already poor, also emerged with industrialization. The Civil War of the 1860s and its troop movements further spread disease and disrupted food production and distribution. A large volume of immigration also brought new varieties of disease to the United States at a time when urbanization brought a growing proportion of the population into closer contact with contagious diseases. Estimates of life expectancy among adults at ages 20, 30, and 50, which were assembled from family histories, also declined in the middle of the nineteenth century.

Rapid Increases in Heights in the First Half of the Twentieth Century

In the twentieth century, heights grew most rapidly for those born between 1910 and 1950, an era when public health and personal hygiene measures took vigorous hold, incomes rose rapidly and there was reduced congestion in housing. The latter part of the era also witnessed a larger share of income or wealth going to the lower portion of the distribution, implying that the incomes of the less well-off were rising relatively rapidly. Note that most of the rise in heights occurred before modern antibiotics were available, which means that disease prevention rather than the ability to alter its course after onset, was the most important basis of improving health. The growing control that humans have exercised over their environment, particularly increased food supply and reduced exposure to disease, may be leading to biological (but not genetic) evolution of humans with more durable vital organ systems, larger body size, and later onset of chronic diseases.

Recent Stagnation

Between the middle of the twentieth century and the present, however, the average heights of American men have stagnated, increasing by only a small fraction of an inch over the past half century. Table 3 refers to the native born, so recent increases in immigration cannot account for the stagnation. In the absence of other information, one might be tempted to suppose that environmental conditions for growth are so good that most Americans have simply reached their genetic potential. Yet heights and life expectancy have continued to grow in Europe, which supplied the genetic stock from which most Americans descend. By the 1970s several American health indicators had fallen behind those in Norway, Sweden, the Netherlands, and Denmark. While American heights were essentially flat after the 1970s, heights continued to grow significantly in Europe. Dutch men are now the tallest, averaging six feet, about two inches taller than American men. These lagging heights raise questions about the adequacy of health care and life-style choices in America. As discussed below, it is doubtful that lack of resource commitment to health care is the problem, because America invests far more in health care than the Netherlands does. Greater inequality and unequal access to health care could be important factors in the difference. But access alone, whether limited by low income or lack of insurance coverage, may not be the whole story: health care must also be used regularly and wisely. In this regard, Dutch mothers are known for regular pre- and post-natal checkups, which are important for early childhood health.

Note that significant differences in health and the quality of life follow from these height patterns. The comparisons are not part of an odd contest that emphasizes height, nor is big per se assumed to be beautiful. Instead, we know that on average, stunted growth has functional implications for longevity, cognitive development, and work capacity. Children who fail to grow adequately are often sick, suffer learning impairments and have a lower quality of life. Growth failure in childhood has a long reach into adulthood because individuals whose growth has been stunted are at greater risk of death from heart disease, diabetes, and some types of cancer. Therefore it is important to know why Americans are falling behind.

International Comparisons

Per capita GDP

Table 4 places American economic performance in perspective relative to other countries. In 1820 the United States was fifth in world rankings, falling roughly thirty percent below the leaders (United Kingdom and the Netherlands), but still two-to-three times better off than the poorest sections of the globe. It is notable that in 1820 the richest country (the Netherlands at $1,821) was approximately 4.4 times better off than the poorest (Africa at $418) but by 1950 the ratio of richest-to-poorest had widened to 21.8 ($9,561 in the United States versus $439 in China), which is roughly the level it is today (in 1998, it was $27,331 in the United States versus $1,368 in Africa). These calculations understate the growing disparity in the material standard of living because several African countries today fall significantly below the average, whereas it is unlikely that they did so in 1820 because GDP for the continent as a whole was close to the level of subsistence.
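The richest-to-poorest ratios quoted above follow directly from the country figures in Table 4. A small sketch reproducing them (values in 1990 international dollars):

```python
# Richest-to-poorest GDP per capita ratios cited in the text
# (1990 international dollars, from Table 4).
benchmarks = {
    1820: ("Netherlands", 1821, "Africa", 418),
    1950: ("United States", 9561, "China", 439),
    1998: ("United States", 27331, "Africa", 1368),
}

for year, (rich, rich_gdp, poor, poor_gdp) in benchmarks.items():
    print(f"{year}: {rich} / {poor} = {rich_gdp / poor_gdp:.1f}")
```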

Table 4: GDP per Capita by Country and Year (1990 International $)

Country 1820 1870 1913 1950 1973 1998 Ratio 1998 to 1820
Austria 1,218 1,863 3,465 3,706 11,235 18,905 15.5
Belgium 1,319 2,697 4,220 5,462 12,170 19,442 14.7
Denmark 1,274 2,003 3,912 6,946 13,945 22,123 17.4
Finland 781 1,140 2,111 4,253 11,085 18,324 23.5
France 1,230 1,876 3,485 5,270 13,123 19,558 15.9
Germany 1,058 1,821 3,648 3,881 11,966 17,799 16.8
Italy 1,117 1,499 2,564 3,502 10,643 17,759 15.9
Netherlands 1,821 2,753 4,049 5,996 13,082 20,224 11.1
Norway 1,104 1,432 2,501 5,463 11,246 23,660 21.4
Sweden 1,198 1,664 3,096 6,738 13,493 18,685 15.6
Switzerland 1,280 2,202 4,266 9,064 18,204 21,367 16.7
United Kingdom 1,707 3,191 4,921 6,907 12,022 18,714 11.0
Portugal 963 997 1,244 2,069 7,343 12,929 13.4
Spain 1,063 1,376 2,255 2,397 8,739 14,227 13.4
United States 1,257 2,445 5,301 9,561 16,689 27,331 21.7
Mexico 759 674 1,732 2,365 4,845 6,655 8.8
Japan 669 737 1,387 1,926 11,439 20,413 30.5
China 600 530 552 439 839 3,117 5.2
India 533 533 673 619 853 1,746 3.3
Africa 418 444 585 852 1,365 1,368 3.3
World 667 867 1,510 2,114 4,104 5,709 8.6
Ratio of richest to poorest 4.4 7.2 8.9 20.6 21.7 20.0

Source: Maddison (2001), Table B-21.

It is clear that the poorer countries are better off today than they were in 1820 (3.3-fold gains in both Africa and India), but the countries that are now rich grew at a much faster rate. The last column of Table 4 shows that Japan realized the most spectacular gain, climbing from approximately the world average in 1820 to the fifth richest today, with a more than thirty-fold increase in real per capita GDP. All countries that are rich today had rapid increases in their material standard of living, realizing more than ten-fold increases since 1820. The underlying reasons for this diversity of economic success are a central question in the field of economic history.

Life Expectancy

Table 5 shows that disparities in life expectancy have been much less than those in per capita GDP. In 1820 all countries were bunched in the range of 21 to 41 years, with Germany at the top and India at the bottom, giving a ratio of less than 2 to 1. It is doubtful that any country or region has had a life expectancy below 20 years for long periods of time because death rates would have exceeded any plausible upper limit for birth rates, leading to population implosion. The twentieth century witnessed a compression in life expectancies across countries, with the ratio of levels in 1999 being 1.56 (81 in Japan versus 52 in Africa). Japan has also been a spectacular performer in health, increasing life expectancy from 34 years in 1820 to 81 years in 1999. Among poor unhealthy countries, health aspects of the standard of living have improved more rapidly than the material standard of living relative to the world average. Because many public health measures are cheap and effective, it has been easier to extend life than it has been to promote material prosperity, which has numerous complicated causes.

Table 5: Life Expectancy at Birth by Country and Year

Country 1820 1900 1950 1999
France 37 47 65 78
Germany 41 47 67 77
Italy 30 43 66 78
Netherlands 32 52 72 78
Spain 28 35 62 78
Sweden 39 56 70 79
United Kingdom 40 50 69 77
United States 39 47 68 77
Japan 34 44 61 81
Russia 28 32 65 67
Brazil 27 36 45 67
Mexico n.a. 33 50 72
China n.a. 24 41 71
India 21 24 32 60
Africa 23 24 38 52
World 26 31 49 66

n.a.: not available.

Source: Maddison (2001), Table 1-5a.

Height Comparisons

Figure 1 compares stature in the United States and the United Kingdom. Americans were very tall by global standards in the early nineteenth century as a result of their rich and varied diets, low population density, and relative equality of wealth. Unlike other countries that have been studied (France, the Netherlands, Sweden, Germany, Japan and Australia), both the U.S. and the U.K. suffered significant height declines during industrialization (as defined primarily by the achievement of modern economic growth) in the nineteenth century. (Note, however, that the amount and timing of the height decline in the U.K. has been the subject of a lively debate in the Economic History Review involving Roderick Floud, Kenneth Wachter and John Komlos; only the Floud-Wachter figures are given here.)

Source: Steckel (2002, Figure 12) and Floud, Wachter and Gregory (1990, table 4.8).

One may speculate that the timing of the declines shown in Figure 1 is more coincidental than emblematic of similar causal factors operating in both countries. While it is possible that growing trade and commerce spread disease, as in the United States, a more likely culprit in the U.K. was rapid urbanization and the associated increase in exposure to disease. This conclusion follows from noting that urban-born men were substantially shorter than the rural-born, and that between the periods 1800-1830 and 1830-1870 the share of the British population living in urban areas leaped from 38.7 to 54.1 percent.

References

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert William Fogel and Stanley L. Engerman. New York: Harper and Row, 1971.

Engerman, Stanley L. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth W. Wachter and Annabel S. Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Haines, Michael. “Vital Statistics.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution.” Journal of Economic History 58, no. 3 (1998): 779-802.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Meeker, Edward. “Medicine and Public Health.” In Encyclopedia of American Economic History, edited by Glenn Porter. New York: Scribner, 1980.

Pope, Clayne L. “Adult Mortality in America before 1900: A View from Family Histories.” In Strategic Factors in Nineteenth-Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff. Chicago: University of Chicago Press, 1992.

Porter, Roy, editor. The Cambridge Illustrated History of Medicine. Cambridge: Cambridge University Press, 1996.

Steckel, Richard H. “Health, Nutrition and Physical Well-Being.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Steckel, Richard H. “Industrialization and Health in Historical Perspective.” In Poverty, Inequality and Health, edited by David Leon and Gill Walt. Oxford: Oxford University Press, 2000.

Steckel, Richard H. “Strategic Ideas in the Rise of the New Anthropometric History and Their Implications for Interdisciplinary Research.” Journal of Economic History 58, no. 3 (1998): 803-21.

Steckel, Richard H. “Stature and the Standard of Living.” Journal of Economic Literature 33, no. 4 (1995): 1903-1940.

Steckel, Richard H. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46, no. 3 (1986): 721-41.

Steckel, Richard H. and Roderick Floud, editors. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Citation: Steckel, Richard. “A History of the Standard of Living in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. July 21, 2002. URL http://eh.net/encyclopedia/a-history-of-the-standard-of-living-in-the-united-states/