
Confederate Political Economy: Creating and Managing a Southern Corporatist Nation

Author(s): Bonner, Michael Brem
Reviewer(s): Pecquet, Gary M.

Published by EH.Net (September 2016)

Michael Brem Bonner, Confederate Political Economy: Creating and Managing a Southern Corporatist Nation. Baton Rouge: Louisiana State University Press, 2016. x + 260 pp. $48 (hardcover), ISBN: 978-0-8071-6212-5.

Reviewed for EH.Net by Gary M. Pecquet, Department of Economics, Central Michigan University.

Historian Michael Bonner examines how the government, bureaucrats, industrial leaders and ordinary citizens interacted under the pressures of emergency wartime conditions to create a distinct Confederate political economy. The Confederate war economy dispensed with many of the normal features of a market economy just as the United States did in prosecuting the two world wars during the twentieth century.

Although the extent of the Confederate government’s economic control was unprecedented by American standards, Bonner correctly contends that the Confederate political economy was never the top-down command economy that Louise Hill (1936) and William Davis (1994) described. He dispatches the claim that the Confederacy adopted a “State Socialist” model. According to him, “The State Socialism argument overlooks the capitalists, both the agricultural capitalist slaveholders and the growing class of industrial capitalists, who facilitated the dramatically increased production of war materiel” (p. 222, fn. 44).

Instead, Bonner draws from Nobel laureate Edmund Phelps’ Mass Flourishing (2013) and finds the Confederate war economy to be neither free market nor state socialist. It compared more closely to the authoritarian “corporatist” economic model (aka “Fascism,” aka “crony capitalism”) adopted throughout Europe during the early-to-mid twentieth century to preserve traditional social values from dynamic capitalism. Markets are tolerated, but subject to significant government oversight and regulation.

Confederate leaders did not intentionally set up an authoritarian regime; the corporatist model emerged out of wartime expediency. Special provisions in the new Confederate Constitution conferred additional powers on the executive branch: a six-year presidential term, a line-item veto and executive control over government expenditures. In addition, the Confederate Supreme Court was never appointed or confirmed; nor did the Congress even establish federal courts for judicial review. Moreover, although Confederate congressmen and senators continued to stand for elections, wartime pressures for patriotism reduced the role of political parties and prevented gridlock. This gave President Jefferson Davis a free hand to contract with selected private firms to secure war supplies. Initially, the Confederacy lacked a bureaucracy and trained public servants, so it often relied upon assistance from the states for enforcement.

The Confederate authorities had to negotiate with private interests in order to ensure reliable supplies of essential military goods. They also developed an ad hoc policy towards railroads. The Confederacy conscripted men into military service and imposed a system of wartime passes upon civilians to prevent espionage.

In thorough narratives, Bonner describes the corporatist interworkings between the Confederate government and private manufacturers. These included the Tredegar Iron Works of Richmond and the Shelby Iron Works of Alabama, near Birmingham. Drawing largely from Charles Dew (1966), Bonner describes the rise of entrepreneur/businessman Joseph Reid Anderson, who built the Tredegar Iron Works ten years before the beginning of the war. Tredegar secured early contracts from the emerging Confederate government in February 1861 and continued to sell both to private railroads and to the government. Bonner does a good job describing the contractual negotiations between Tredegar and government purchasing agents. With rampant inflation, the company faced rising costs and accusations of price gouging by Confederate politicians. These complaints increased, and by late 1862 an army ordnance officer accused the company of earning excessive profits of 30 to 50 percent or even as high as “60-80% in recent months” (p. 80). After 1863 the company asked five more times for price increases and the accusations of profiteering only got worse. But Tredegar was the major supplier of iron to the Confederacy and continued to obtain contracts. We may regard this process of price increases followed by accusations and new contracts as a bilateral negotiating process taking place during times of depreciating currency values. (Incidentally, this process was not substantially different from the union-management negotiations under periods of continuous price inflation in certain twentieth-century corporatist nations.)

Compared to Tredegar, Shelby Iron Works of Alabama was a latecomer. Drawing largely on new primary sources, Bonner uncovers details of crony capitalism between the Shelby Company and government purchasing agents. At Shelby, prospective owners sought to expand operations, but wanted government protection from risks, so with the help of an influential Confederate purchasing agent, they secured a $75,000 loan from the Confederacy. But the private/public partnership created conflicting expectations that undermined the effectiveness of the operation. The company was supposed to repay the loan, but the government expected that the added facilities be used to fill government orders, not private ones. Government officials feared that Shelby iron might be sold at higher prices to private buyers. The government also aided the Shelby works by exempting key employees from the draft. The major point of contention between Shelby and the government was the means of payment (sound money or depreciated Confederate notes and bonds). Eventually, however, Shelby agreed to accept payment at fixed prices, with an eye to renegotiating new terms as prices increased. But in this case, Confederate officials could threaten Shelby’s labor supply by denying draft exemptions, so the government may have held the advantage.

The Confederacy did embark upon a major government-run business. At the beginning of the war, the South had only four small local gunpowder mills. The Confederate government decided to create a single, large government-owned gunpowder factory at a secure location to provide its wartime requisitions. This single factory successfully supplied the Confederate armies for most of the war.

Bonner does a good job showing how Woodrow Wilson’s administration adopted the Confederate corporatist model as it mobilized the economy for participation in World War I. Although the Confederacy stumbled into cozy business-government relationships, the Wilson Administration consciously followed the same pattern. The Wilsonian World War I regime was not a top-down command-and-control system, but one that mixed government force and favors with private cooperation. Like the Confederacy, the WWI selective service relied upon the cooperation of local draft boards. Wilson’s government takeover of the railroads worked much the same way as Confederate control over the rails. In both cases, the governments had to rely upon the railroad owners’ expertise, giving business the upper hand in setting policy.

Bonner’s book provides a helpful addition to the study of early twentieth-century Progressive economic policy. His exploration of Confederate mobilization provides a complementary narrative to Robert Higgs’ (1987) analysis of the growth of government in Crisis and Leviathan. Bonner does not consider the role of intellectual history. Corporatism’s intellectual roots lay in Europe, in the “German Historical School,” whose adherents taught its doctrines in American universities. Wilson adopted the theories of corporatism from his university professors (Pecquet and Thies, 2010). What Bonner shows us is that the practice of wartime mobilization also flowed out of a preexisting pattern that remained in the memories of contemporary historians.

References:

Davis, William C. 1994. A Government of Our Own: The Making of the Confederacy. New York: Free Press.

Dew, Charles B. 1994. Bond of Iron: Master and Slave at Buffalo Forge. New York: W. W. Norton.

Higgs, Robert J. 1987. Crisis and Leviathan: Critical Episodes in the Growth of American Government. Oxford University Press.

Hill, Louise B. 1936. “State Socialism in the Confederate States of America,” in Southern Sketches, Charlottesville, VA: Historical Publishing.

Pecquet, Gary M. and Clifford F. Thies. 2010. “The Shaping of the Political-Economic Thought of a Future President: Professor Ely and Young Woodrow Wilson at ‘The Hopkins,’” The Independent Review, 15 (2): 257-277.

Phelps, Edmund. 2013. Mass Flourishing: How Grassroots Innovation Created Jobs, Challenge and Change. Princeton, NJ: Princeton University Press.

Gary M. Pecquet has published numerous articles on nineteenth and early twentieth century American economic history. The most recent of these works (with Clifford F. Thies) is “Reputation Overrides Record: How Warren G. Harding Mistakenly Became the Worst President of the United States,” The Independent Review (Summer 2016). He can be reached for comment at pecqu1g@cmich.edu.

Copyright (c) 2016 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (September 2016). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s): Government, Law and Regulation, Public Finance
Military and War
Geographic Area(s): North America
Time Period(s): 19th Century

The Economic History of Mexico

Richard Salvucci, Trinity University

 

Preface[1]

This article is a brief interpretive survey of some of the major features of the economic history of Mexico from pre-conquest to the present. I begin with the pre-capitalist economy of Mesoamerica. The colonial period is divided into the Habsburg and Bourbon regimes, although the focus is not really political: the emphasis is instead on the consequences of demographic and fiscal changes that colonialism brought.  Next I analyze the economic impact of independence and its accompanying conflict. A tentative effort to reconstruct secular patterns of growth in the nineteenth century follows, as well as an account of the effects of foreign intervention, war, and the so-called “dictatorship” of Porfirio Diaz.  I then examine the economic consequences of the Mexican Revolution down through the presidency of Lázaro Cárdenas, before considering the effects of the Great Depression and World War II. This is followed by an examination of the so-called Mexican Miracle, the period of import-substitution industrialization after World War II. The end of the “miracle” and the rise of economic instability in the 1970s and 1980s are discussed in some detail. I conclude with structural reforms in the 1990s, the North American Free Trade Agreement (NAFTA), and slow growth in Mexico since then. It is impossible to be comprehensive and the references appearing in the citations are highly selective and biased (where possible) in favor of English-language works, although Spanish is a must for getting beyond the basics. This is especially true in economic history, where some of the most innovative and revisionist work is being done, as it should be, by historians and economists in Mexico.[2]

 

Where (and What) is Mexico?

For most of its long history, Mexico’s boundaries have shifted, albeit within broadly stable limits. Colonial Mexico basically stretched from Guatemala, across what is now California and the Southwestern United States, and vaguely into the Pacific Northwest. There matters stood for more than three centuries.[3] The big shock came at the end of the War of 1847 (“the Mexican-American War” in U.S. history). The Treaty of Guadalupe Hidalgo (1848) ended the war, but in so doing, ceded half of Mexico’s former territory to the United States—recall Texas had been lost in 1836. The northern boundary now ran on a line beginning with the Rio Grande to El Paso, and thence more or less west to the Pacific Ocean south of San Diego. With one major adjustment in 1853 (the Gadsden Purchase or Treaty of the Mesilla) and minor ones thereafter, because of the shifting of the Rio Grande, there it has remained.

Before the arrival of the Europeans, Mexico was a congeries of ethnic states and city-states whose own boundaries were unstable. Prior to the emergence of the most powerful of these states in the fifteenth century, the so-called Triple Alliance (popularly the “Aztec Empire”), Mesoamerica consisted of cultural regions determined by political elites and spheres of influence that were dominated by large ceremonial centers such as La Venta, Teotihuacan, and Tula.

While such regions may have been dominant at different times, they were never “economically” independent of one another. At Teotihuacan, there were living quarters given over to Olmec residents from the Veracruz region, presumably merchants. Mesoamerica was connected, if not unified, by an ongoing trade in luxury goods and valuable stones such as jade, turquoise and precious feathers. This was not, however, trade driven primarily by factor endowments and relative costs. Climate and resource endowments did differ significantly across the widely diverse regions and microclimates of Mesoamerica, yet trade was also political and embedded in religious ritual. For example, calling the shipment of turquoise from the (U.S.) Southwest to Central Mexico the outcome of market activity is an anachronism. In the very long run, such prehistorical exchange facilitated the later emergence of trade routes, roads, and more technologically advanced forms of transport. But arbitrage does not appear to have figured importantly in it.[4]

In sum, what we call “Mexico” in a modern sense is not of much use to the economic historian with an interest in the country before 1870, which is to say, the great bulk of its history. In these years, specificity of time and place, sometimes reaching to the village level, is an indispensable prerequisite for meaningful discussion. At the very least, it is usually advisable to be aware of substantial regional differences which reflect the ethnic and linguistic diversity of the country both before and after the arrival of the Europeans. There are fully ten language families in Mexico, and two of them, Nahuatl and Quiché, number over a million speakers each.[5]

 

Trade and Tribute before the Europeans

In the codices, or folded deerskin paintings, that the Europeans examined (or actually commissioned), they soon became aware of a prominent form of Mesoamerican economic activity: tribute, or taxation in kind, or even labor services. In the absence of anything that served as money, tribute was forced exchange. Tribute has been interpreted as a means of redistribution in a nonmonetary economy. Social and political units formed a basis for assessment, and the goods collected included maize, beans, chile and cotton cloth. It was through tribute that the indigenous “empires” mobilized labor and resources. There is little or no evidence for the existence of labor or land markets to do so, for these were a European import, although marketplaces for goods existed in profusion.

To an extent, the preconquest reliance on barter and the absence of money account for the ubiquity of tribute. The absence of money itself is much more difficult to explain, and it was surely an obstacle to the growth of productivity in the indigenous economies.

Tribute was a near-universal attribute of Mesoamerican ceremonial centers and political empires. The city of Teotihuacan (ca. 600 CE, with a population of 125,000 or more) in central Mexico depended on tribute to support an upper stratum of priests and nobles while the tributary population itself lived at subsistence. Tlatelolco (ca. 1520, with a population ranging from 50 to 100 thousand) drew maize, cotton, cacao, beans and precious feathers from a wide swath of territory extending broadly from the Pacific to the Gulf coast, supporting an upper stratum of priests, warriors, nobles, and merchants. It was this urban complex, sitting atop the lagoons that filled the Valley of Mexico, that so awed the arriving conquerors.

While the characterization of tribute as both a corvée and a tax in kind to support nonproductive populations is surely correct, its persistence in altered (i.e., monetized) form under colonial rule does suggest an important question. The tributary area of the Mexica (“Aztec” is a political term, not an ethnic one) broadly comprised a Pacific slope, a central valley, and a Gulf slope. These embrace a wide range of geographic features, from rugged volcanic highlands (and even higher snow-capped volcanoes) to marshy, humid coastal plains. Even today, travel through these regions is challenging. Lacking both the wheel and draught animals, the indigenous peoples relied on human transport, or, where possible, waterborne exchange. However we measure the costs of transportation, they were high. In the colonial period, they typically circumscribed the subsistence radius of markets to 25 to 35 miles. Under the circumstances, it is not easy to imagine that voluntary exchange, particularly between the coastal lowlands and the temperate to cold highlands and mountains, would be profitable for all but the most highly valued goods. In some parts of Mexico, as in the Andean region, linkages of family and kinship bound different regions together in a cult of reciprocal economic obligations. Yet absent such connections, it is not hard to imagine, for example, that transporting woven cottons from the coastal lowlands to the population centers of the highlands could become a political obligation rather than a matter of profitable, voluntary exchange. The relatively ambiguous role of markets in both labor and goods that persisted into the nineteenth century may derive from just this combination of climatic and geographical characteristics. It is what made voluntary exchange under capitalistic markets such a puzzlingly problematic answer to the ordinary demands of economic activity.

 

[See the relief map for the principal physical features of Mexico: http://www.igeograf.unam.mx/sigg/publicaciones/atlas/anm-2007/muestra_mapa.php?cual_mapa=MG_I_1.jpg]

[See the political map for Mexican states and state capitals. Used by permission of the University of Texas Libraries, The University of Texas at Austin.]

 

“New Spain” or Colonial Mexico: The First Phase

Colonial Mexico was established by military conquest and civil war. In the process, a civilization with its own institutions and complex culture was profoundly modified, if not precisely destroyed, by the European invaders. The catastrophic elements of conquest, including the sharp decline of the existing indigenous population, from perhaps 25 million to fewer than a million within a century, due to warfare, disease, social disorganization and the imposition of demands for labor and resources, should nevertheless not preclude some assessment, however tentative, of its economic level in 1519, when the Europeans arrived.[6]

Recent thinking suggests that Spain was far from poor when it began its overseas expansion. If this were so, the implications of the Europeans’ reactions to what they found on the mainland of Mexico (not, significantly, in the Caribbean, and especially Cuba, where they had first established themselves) are important. We have several accounts of the conquest of Mexico by the European participants, of which Bernal Díaz del Castillo’s is the best known, but not the only one. The reaction of the Europeans was almost uniformly astonishment at the apparent material wealth of Tenochtitlan. The public buildings, spacious residences of the temple precinct, the causeways linking the island to the shore, and the fantastic array of goods available in the marketplace evoked comparisons to Venice, Constantinople, and other wealthy centers of European civilization. While it is true that this was a view of the indigenous elite, the beneficiaries of the wealth accumulated from numerous tributaries, it hardly suggests anything other than a kind of storied opulence. Of course, the peasant commoners lived at subsistence and enjoyed no such privileges, but then so did the peasants of the society from which Bernal Díaz, Cortés, Pedro de Alvarado and the other conquerors were drawn. It is hard to imagine that the average standard of living in Mexico was any lower than that of the Iberian Peninsula. The conquerors remarked on the physical size and apparent robust health of the people whom they met, and from this, scholars such as Woodrow Borah and Sherburne Cook concluded that the physical size of the Europeans and the Mexicans was about the same. Borah and Cook surmised that caloric intake per individual in Central Mexico was around 1,900 calories per day, which certainly seems comparable to European levels.[7]

Certainly, technological differences with Europe, such as the absence of the wheel for transportation, a metallurgy that did not include iron, and the exclusive reliance on pictographic writing systems, hampered commercial exchange. Yet by the same token, Mesoamerican agricultural technology was richly diverse and especially oriented toward labor-intensive techniques, well suited to pre-conquest Mexico’s factor endowments. As Gene Wilken points out, Bernardino de Sahagún explained in his General History of the Things of New Spain that the Nahua farmer recognized two dozen soil types related to origin, source, color, texture, smell, consistency and organic content. They were expert at soil management.[8] So it is possible not only to misspecify but also to mistake the technological “backwardness” of Mesoamerica relative to Europe, and historians routinely have.

The essentially political and clan-based nature of economic activity made the distribution of output somewhat different from standard neoclassical models. Although no one seriously maintains that indigenous civilization did not include private property and, in fact, property rights in humans, the distribution of product tended to emphasize average rather than marginal product. If responsibility for tribute was collective, it is logical to suppose that there was some element of redistribution and collective claim on output by the basic social groups of indigenous society, the clans or calpulli.[9] Whatever the case, it seems clear that viewing indigenous society and economy as strained by population growth to the point of collapse, as the so-called “Berkeley school” did in the 1950s, is no longer tenable. It is more likely that the tensions exploited by the Europeans to divide and conquer their native hosts and so erect a colonial state on pre-existing native entities were mainly political rather than socioeconomic. It was through the assistance of native allies such as the Tlaxcalans, as well as with the help of previously unknown diseases such as smallpox that ravaged the indigenous peoples, that the Europeans were able to place a weakened Tenochtitlan under siege and finally defeat it.

 

Colonialism and Economic Adjustment to Population Decline

With the subjection first of Tenochtitlan and Tlatelolco and then of other polities and peoples, a process that would ultimately stretch well into the nineteenth century and was never really completed, the Europeans turned their attention to making colonialism pay. The process had several components: the modification or introduction of institutions of rule and appropriation; the introduction of new flora and fauna that could be turned to economic use; the reorientation of a previously autarkic and precapitalist economy to the demands of trade and commercial exploitation; and the implementation of European fiscal sovereignty. These processes were complex, required much time, and were, in many cases, only partly successful. There is considerable speculation regarding how long it took before Spain (arguably a relevant term by the mid-sixteenth century) made colonialism pay. The best we can do is present a schematic view of what occurred. Regional variations were enormous: a “typical” outcome or institution of colonialism may well have been an outcome visible in central Mexico. Moreover, all generalizations are fragile, rest on limited quantitative evidence, and will no doubt be substantially modified eventually. The message is simple: proceed with caution.

The Europeans did not seek to take Mesoamerica as a tabula rasa. In some ways, they would have been happy to simply become the latest in a long line of ruling dynasties established by decapitating native elites and assuming control. The initial demand of the conquerors for access to native labor in the so-called encomienda was precisely that, with the actual task of governing left to the surviving and collaborating elite: the principle of “indirect rule.”[10] There were two problems with this strategy: the natives resisted and the natives died. They died in such large numbers as to make the original strategy impracticable.

The number of people who lived in Mesoamerica has long been a subject of controversy, but there is no point in spelling it out once again. The numbers are unknowable and, in an economic sense, not really important. The population of Tenochtitlan has been variously estimated between 50 and 200 thousand individuals, depending on the instruments of estimation. As previously mentioned, some estimates of the Central Mexican population range as high as 25 million on the eve of the European conquest, and virtually no serious student accepts the small population estimates based on the work of Angel Rosenblat. The point is that labor was abundant relative to land, and that the small surpluses of a large tributary population must have supported the opulent elite that Bernal Díaz and his companions described.

By 1620, or thereabouts, the indigenous population had fallen to less than a million, according to Cook and Borah. This is not just the quantitative speculation of modern historical demographers. Contemporaries such as Jerónimo de Mendieta in his Historia eclesiástica Indiana (1596) spoke of towns formerly densely populated now witness to “the palaces of those former Lords ruined or on the verge of. The homes of the commoners mostly empty, roads and streets deserted, churches empty on feast days, the few Indians who populate the towns in Spanish farms and factories.” Mendieta was an eyewitness to the catastrophic toll that European microbes and warfare took on the native population. There was a smallpox epidemic in 1519-20 when 5 to 8 million died. The epidemic of hemorrhagic fever in 1545 to 1548 was one of the worst demographic catastrophes in human history, killing 5 to 15 million people. For the epidemic of 1576 to 1578, when 2 to 2.5 million people died, we have clear evidence that land prices collapsed in the Valley of Mexico, specifically in Coyoacán, a village outside Mexico City (as the reconstructed Tenochtitlán was now called). The death toll was staggering. Lesser outbreaks were registered in 1559, 1566, 1587, 1592, 1601, 1604, 1606, 1613, 1624, and 1642. The larger point is that the intensive use of native labor, such as the encomienda, had to come to an end, whatever its legal status had become by virtue of the New Laws (1542). The encomienda or the simple exploitation of massive numbers of indigenous workers was no longer possible. There were too few “Indians” by the end of the sixteenth century.[11]

As a result, the institutions and methods of economic appropriation were forced to change. The Europeans introduced pastoral agriculture, the herding of cattle and sheep, and put now-abundant land and scarce labor to use in the form of the hacienda. The remaining natives were brought together in “villages” whose origins were not essentially pre- but post-conquest, the so-called congregaciones, at the same time that titles to now-vacant lands were created, regularized and “composed.”[12] (Land titles were a European innovation as well.) Sheep and cattle, which the Europeans introduced, became part of the new institutional backbone of the colony. The natives would continue to rely on maize for the better part of their subsistence, but the Europeans introduced wheat, olives (oil), grapes (wine) and even chickens, which the natives rapidly adopted. On the whole, the results of these alterations were complex. Some scholars argue that the native diet improved even in the face of their diminishing numbers, a consequence of increased land per person and of greater variety of foodstuffs, and that the agricultural potential of the colony now called New Spain was enhanced. By the beginning of the seventeenth century, the combined indigenous, European immigrant, and new mixed blood populations could largely survive on the basis of their own production. The introduction of sheep led to the manufacture of woolens in what were called obrajes, or manufactories, in Puebla, Querétaro, and Coyoacán. The native peoples continued to produce cottons (a domestic crop) under the stimulus of European organization, lending, and marketing. Extensive pastoralism, the cultivation of cereals and even the incorporation of native labor then characterized the emergence of the great estates or haciendas, which became a characteristic rural institution through the twentieth century, when the Mexican Revolution put an end to many of them. Thus the colony of New Spain continued to feed, clothe and house itself independent of metropolitan Spain’s direction. Certainly, Mexico before the Conquest was self-sufficient. The extent to which the immigrant and American Spaniard or creole population depended on imports of wine, oil and other foodstuffs and textiles in the decades immediately following the conquest is much less clear.

At the same time, other profound changes accompanied the introduction of Europeans, their crops and their diseases into what they termed the “kingdom” (not colony, for constitutional reasons) of New Spain.[13] Prior to the conquest, land and labor had been commoditized, but not to any significant extent, although there was a distinction recognized between possession and ownership.  Scholars who have closely examined the emergence of land markets after the conquest—mainly in the Valley of Mexico—are virtually unanimous in this conclusion. To the extent that markets in labor and commodities had emerged, it took until the 1630s (and later elsewhere in New Spain) for the development to reach maturity. Even older mechanisms of allocation of labor by administrative means (repartimiento) or by outright coercion persisted. Purely economic incentives in the form of money wages and prices never seemed adequate to the job of mobilizing resources and those with access to political power were reluctant to pay a competitive wage. In New Spain, the use of some sort of political power or rent-seeking nearly always accompanied labor recruitment. It was, quite simply, an attempt to evade the implications of relative scarcity, and renders the entire notion of “capitalism” as a driving economic force in colonial Mexico quite inexact.

 

Why the Settlers Resisted the Implications of Scarce Labor

The reasons behind this development are complex and varied. The evidence we have for the Valley of Mexico demonstrates that the relative price of labor rose while the relative price of land fell even when nominal movements of one or the other remained fairly limited. For instance, the table constructed below demonstrates that from 1570-75 through 1591-1606, the price of unskilled labor in the Valley of Mexico nearly tripled while the price of land in the Valley (Coyoacán) fell by nearly two thirds. On the whole, the price of labor relative to land increased by nearly 800 percent. The evolution of relative prices would have inevitably worked against the demanders of labor (Europeans and increasingly, creoles or Americans of largely European ancestry) and in favor of the supplier (native labor, or people of mixed race generically termed mestizo). This was not, of course, what the Europeans had in mind, and by capturing legal institutions (local magistrates, in particular), they frequently sought to substitute compulsion for what would have been costly “free labor.” What has been termed the “depression” of the seventeenth century may well represent one of the consequences of this evolution: an abundance of land, a scarcity of labor, and the attempt of the new rulers to adjust to changing relative prices. There were repeated royal prohibitions on the use of forced indigenous labor in both public and private works, and thus a reduction in the supply of labor. All highly speculative, no doubt, but the adjustment came during the central decades of the seventeenth century, when New Spain increasingly produced its own woolens and cottons, and largely assumed the tasks of providing itself with foodstuffs and was thus required to save and invest more. No doubt, the new rulers felt the strain of trying to do more with less.[14]

 

Years          Land Price Index    Labor Price Index    (Labor/Land) Index
1570-1575      100                 100                  100
1576-1590      50                  143                  286
1591-1606      33                  286                  867

 

Source: Calculated from Rebecca Horn, Postconquest Coyoacan: Nahua-Spanish Relations in Central Mexico, 1519-1650 (Stanford: Stanford University Press, 1997), p. 208 and José Ignacio Urquiola Permisan, “Salarios y precios en la industria manufacturera textil de la lana en Nueva España, 1570-1635,” in Virginia García Acosta (ed.), Los precios de alimentos y manufacturas novohispanos (México, DF: CIESAS, 1995), p. 206.
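To make explicit how the last column of the table is computed (a simple arithmetic check derived from the table itself, not an additional figure from the sources), the relative price index is the labor index divided by the land index, rescaled to 100:

\[
\frac{\text{Labor index}}{\text{Land index}} \times 100, \qquad \text{e.g., for 1591-1606:} \quad \frac{286}{33} \times 100 \approx 867 .
\]

Against the 1570-1575 base of 100, this is an increase of roughly 770 percent, the “nearly 800 percent” rise in the price of labor relative to land cited above.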

 

The overall role of Mexico within the Hapsburg Empire was in flux as well. Nothing signals the change as much as the emergence of silver mining as the principal source of Mexican exportables in the second half of the sixteenth century. While Mexico would soon be eclipsed by Peru as the most productive center of silver mining—at least until the eighteenth century—the discovery of significant silver mines in Zacatecas in the 1540s transformed the economy of the Spanish empire and the character of New Spain’s economy as well.

 

 

 

Silver Mining

While silver mining and smelting were practiced before the conquest, they were never a focal point of indigenous activity. But for the Europeans, Mexico was largely about silver mining. From the mid-sixteenth century onward, it was explicitly understood by the viceroys that they were to do all in their power to “favor the mines,” as one memorable royal instruction enjoined. Again, there has been much controversy over the precise amounts of silver that Mexico sent to the Iberian Peninsula. What we do know for certain is that Mexico (and the Spanish Empire) became the leading source of silver, monetary reserves, and thus, of high-powered money. Over the course of the colonial period, most sources agree that Mexico provided nearly 2 billion pesos (dollars) or roughly 1.6 billion troy ounces to the world economy. The graph below provides a picture of the remissions of all Mexican silver to both Spain and the Philippines, taken from the work of John TePaske.[15]

[Graph: remissions of Mexican silver to Spain and the Philippines, from the work of John TePaske.]

Since the population of Mexico under Spanish rule was at most 6 million people by the end of the colonial period, the kingdom’s silver output could only be considered astronomical.

This production has to be considered in both its domestic and international dimensions. From a domestic perspective, the mines were what a later generation of economists would call “growth poles.” They were markets in which inputs were transformed into tradable outputs at a much higher rate of productivity (because of mining’s relatively advanced technology) than Mexico’s other activities. Silver thus became Mexico’s principal exportable good, and remained so well into the late nineteenth century. The residual claimants on silver production were many and varied. There were, of course, the silver miners themselves in Mexico and their merchant financiers and suppliers. They ranged from some of the wealthiest people in the world at the time, such as the Count of Regla (1710-1781), who donated warships to Spain in the eighteenth century, to individual natives in Zacatecas smelting their own stocks of silver ore.[16] While the conditions of labor in Mexico’s silver mines were almost uniformly bad, the compensation ranged from above-market wages paid to free labor in the prosperous larger mines of the Bajío and the North to the use of forced village labor drafts in more marginal (and presumably less profitable) sites such as Taxco. In the Iberian Peninsula, income from American silver mines ultimately supported not only a class of merchant entrepreneurs in the large port cities, but virtually the core of the Spanish political nation, including monarchs, royal officials, churchmen, the military and more. And finally, silver flowed to those who valued it most highly throughout the world. It is generally estimated that 40 percent of Spain’s American (not just Mexican, but Peruvian as well) silver production ended up in hoards in China.

Within New Spain, mining centers such as Guanajuato, San Luis Potosí, and Zacatecas became places where economic growth took place rapidly, in which labor markets more readily evolved, and in which the standard of living became obviously higher than in neighboring regions. Mining centers tended to crowd out growth elsewhere because the rate of return for successful mines exceeded what could be gotten in commerce, agriculture and manufacturing. Because silver was the numeraire for Mexican prices—Mexico was effectively on a silver standard—variations in silver production could and did have substantial effects on real economic activity elsewhere in New Spain. There is considerable evidence that silver mining saddled Mexico with an early case of “Dutch disease” in which irreducible costs imposed by the silver standard ultimately rendered manufacturing and the production of other tradable goods in New Spain uncompetitive. For this reason, the expansion of Mexican silver production in the years after 1750 was never unambiguously accompanied by overall, as opposed to localized, prosperity. Silver mining tended to absorb a disproportionate quantity of resources and to keep New Spain’s price level high, even when the business cycle slowed down—a fact that was to impress visitors to Mexico well into the nineteenth century. Mexican silver accounted for well over three-quarters of exports by value into the nineteenth century as well. The estimates vary widely, for silver was by no means the only, or even the most important, source of revenue to the Crown, but by the end of the colonial era, the Kingdom of New Spain probably accounted for 25 percent of the Crown’s imperial income.[17] That is why reformist proposals circulating in governing circles in Madrid in the late eighteenth century fixed on Mexico. If there was any threat to the American Empire, royal officials thought that Mexico, and increasingly, Cuba, were worth holding on to. From a fiscal standpoint, Mexico had become just that important.[18]

 

“New Spain”: The Second Phase of the Bourbon “Reforms”

In 1700, the last of the Spanish Hapsburgs died and a disputed succession followed. The ensuing conflict, known as the War of the Spanish Succession, came to an end in 1714. The grandson of the French king Louis XIV came to the Spanish throne as King Philip V. The dynasty he represented was known as the Bourbons. For the next century or so, they were to determine the fortunes of New Spain. Traditionally, the Bourbons, especially the later ones, have been associated with an effort to “renationalize” the Spanish empire in America after it had been thoroughly penetrated by French, Dutch, and lastly, British commercial interests.[19]

There were at least two areas in which the Bourbon dynasty, “reformist” or no, affected the Mexican economy. One of them dealt with raising revenue and the other was the international position of the imperial economy, specifically, the volume and value of trade. A series of statistics calculated by Richard Garner shows that the share of Mexican output or estimated GDP taken by taxes grew by 167 percent between 1700 and 1800. The number of taxes collected by the Royal Treasury increased from 34 to 112 between 1760 and 1810. This increase, sometimes labelled as a Bourbon “reconquest” of Mexico after a century and a half of drift under the Hapsburgs, occurred because of Spain’s need to finance increasingly frequent and costly wars of empire in the eighteenth century. An entire array of new taxes and fiscal placemen came to Mexico. They affected (and alienated) everyone, from the wealthiest merchant to the humblest villager. If they did nothing else, the Bourbons proved to be expert tax collectors.[20]

The second and equally consequential change in imperial management lay in the revision and “deregulation” of New Spain’s international trade, or the evolution from a “fleet” system to a regime of independent sailings, and then, finally, of voyages to and from a far larger variety of metropolitan and colonial ports. From the mid-sixteenth century onwards, ocean-going trade between Spain and the Americas was, in theory, at least, closely regulated and supervised. Ships in convoy (flota) sailed together annually under license from the monarchy and returned together as well. Since so much silver specie was carried, the system made sense, even if the flotas made a tempting target and the problem of contraband was immense. The point of departure was Seville and later, Cadiz. Under pressure from other outports in the late eighteenth century, the system was finally relaxed. As a consequence, the volume and value of trade to Mexico increased as the price of importables fell. Import-competing industries in Mexico, especially textiles, suffered under competition and established merchants complained that the new system of trade was too loose. But to no avail. There is no measure of the barter terms of trade for the eighteenth century, but anecdotal evidence suggests they improved for Mexico. Nevertheless, it is doubtful that these gains could have come anywhere close to offsetting the financial cost of Spain’s “reconquest” of Mexico.[21]

On the other hand, the few accounts of per capita real income growth in the eighteenth century that exist suggest little more than stagnation, the result of population growth and a rising price level. Admittedly, looking for modern economic growth in Mexico in the eighteenth century is an anachronism, although there is at least anecdotal evidence of technological change in silver mining, especially the use of gunpowder for blasting and excavating, and of some increase in mining productivity. So even though the share of international trade outside of goods such as cochineal and silver was quite small, at the margin, changes in the trade regime were important. There is also some indication that asset income rose and labor income fell, which fueled growing social tensions in New Spain. In the last analysis, the growing fiscal pressure of the Spanish empire came when the standard of living for most people in Mexico—the native and mixed blood population—was stagnating. During periodic subsistence crises, especially those propagated by drought and epidemic disease, and mostly in the 1780s, living standards fell. Many historians think of late colonial Mexico as something of a powder keg waiting to explode. When it did, in 1810, the explosion was the result of a political crisis at home and a dynastic failure abroad. What New Spain had negotiated during the War of the Spanish Succession—regime change—proved impossible to surmount during the Napoleonic Wars (1794-1815). This may well be the most sensitive indicator of how economic conditions changed in New Spain under the heavy, not to say clumsy, hand of the Bourbon “reforms.”[22]

 

The War for Independence, the Insurgency, and Their Legacy

The abdication of the Bourbon monarchy to Napoleon Bonaparte in 1808 produced a series of events that ultimately resulted in the independence of New Spain. The rupture was accompanied by a violent peasant rebellion headed by the clerics Miguel Hidalgo and José Morelos that, one way or another, carried off 10 percent of the population between 1810 and 1820. Internal commerce was largely paralyzed. Silver mining essentially collapsed between 1810 and 1812 and a full recovery of mining output was delayed until the 1840s. The mines located in zones of heavy combat, such as Guanajuato and Querétaro, were abandoned by fleeing workers. Thus neglected, they quickly flooded.

At the same time, the fiscal and human costs of this period, the Insurgency, were even greater.[23] The heavy borrowings in which the Bourbons engaged to finance their military alliances left Mexico with a considerable legacy of internal debt, estimated at £16 million at Independence. The damage to the fiscal, bureaucratic and administrative structure of New Spain, together with the continuing threat of Spanish reinvasion in the 1820s (Spain did not recognize the independence of Mexico, achieved in 1821), drove the independent governments into foreign borrowing on the London market to the tune of £6.4 million in order to finance continuing heavy military outlays. With a reduced fiscal capacity, in part the legacy of the Insurgency and in part the deliberate effort of Mexican elites to resist any repetition of Bourbon-style taxation, Mexico defaulted on its foreign debt in 1827. There followed sixty years of a serpentine history of moratoria, restructuring and repudiation (1867); it took until 1884 for the government to regain access to international capital markets, at what cost can only be imagined. Private sector borrowing and lending continued, although to what extent is currently unknown. What is clear is that the total (internal plus external) indebtedness of Mexico relative to late colonial GDP was somewhere in the range of 47 to 56 percent.[24]

This was, perhaps, not an insubstantial amount for a country whose mechanisms of public finance were in what could be mildly termed chaotic condition in the 1820s and 1830s as the form, philosophy, and mechanics of government oscillated from federalist to centralist and back into the 1850s.  Leaving aside simple questions of uncertainty, there is the very real matter that the national government—whatever the state of private wealth—lacked the capacity to service debt because national and regional elites denied it the means to do so. This issue would bedevil successive regimes into the late nineteenth century, and, indeed, into the twentieth.[25]

At the same time, the demographic effects of the Insurgency exacted a cost in terms of lost output from the 1810s through the 1840s. Gaping holes in the labor force emerged, especially in the fertile agricultural plains of the Bajío, creating further obstacles to the growth of output. It is simply impossible to generalize about the fortunes of the Mexican economy in this period because of the dramatic regional variations in the Republic’s economy. A rough estimate of output per head in the late colonial period was perhaps 40 pesos (dollars).[26] After a sharp contraction in the 1810s, income remained in that neighborhood well into the 1840s, at least until the eve of the war with the United States in 1846. By the time United States troops crossed the Rio Grande, a recovery had been under way, but the war arrested it. Further political turmoil and civil war in the 1850s and 1860s represented setbacks as well. In this way, a half century or so of potential economic growth was sacrificed from the 1810s through the 1870s. This was not an uncommon experience in Latin America in the nineteenth century, and the period has even been called The Stage of the Great Delay.[27] Whatever the exact rate of real per capita income growth was, it is hard to imagine it ever exceeded two percent, if indeed it reached much more than half that.

 

Agricultural Recovery and War

On the other hand, it is clear that there was a recovery in agriculture in the central regions of the country, most notably in the staple maize crop and in wheat. The famines of the late colonial era, especially of 1785-86, when massive numbers perished, were not repeated. There were years of scarcity and periodic corresponding outbreaks of epidemic disease—the cholera epidemic of 1832 affected Mexico as it did so many other places—but by and large, the dramatic human wastage of the colonial period ceased, and the death rate does appear to have begun to fall. Very good series on wheat deliveries and retail sales taxes for the city of Puebla southeast of Mexico City show a similarly strong recovery in the 1830s and early 1840s, punctuated only by the cholera epidemic whose effects were felt everywhere.[28]

Ironically, while the Panic of 1837 appears to have at least hit the financial economy in Mexico hard with a dramatic fall in public borrowing (and private lending), especially in the capital,[29] an incipient recovery of the real economy was ended by war with the United States. It is not possible to put numbers on the cost of the war to Mexico, which lasted intermittently from 1846 to 1848, but the loss of what had been the Southwest under Mexico is most often emphasized. This may or may not be accurate. Certainly, the loss of California, where gold was discovered in January 1848, weighs heavily on the historical imaginations of modern Mexicans. There is also the sense that the indemnity paid by the United States–$15 million—was wholly inadequate, which seems at least understandable when one considers that Andrew Jackson offered $5 million to purchase Texas alone in 1829.

It has been estimated that in 1900 the agricultural output of the Mexican “cession,” as it was called, was nearly $64 million, and that the value of livestock in the territory was over $100 million. The value of gold and silver produced was about $35 million. Whether it is reasonable to employ the numbers in estimating the present value of output relative to the indemnity paid is at least debatable as a counterfactual, unless one chooses to regard this as the annuitized value on a perpetuity “purchased” from Mexico at gunpoint, which seems more like robbery than exchange. In the long run, the loss may have been staggering, but in the short run, much less so. The northern territories Mexico lost had really yielded very little up until the War. In fact, the balance of costs and revenues to the Mexican government may well have been negative.[30]
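As a purely illustrative sketch of what valuing the cession as a perpetuity would imply (the discount rate below is an assumption chosen for illustration, not a figure from the text), the standard perpetuity formula is:

\[
PV = \frac{C}{r}, \qquad \text{e.g., } C = \$64 \text{ million (the 1900 agricultural output alone)},\ r = 0.05 \ \Rightarrow\ PV \approx \$1.28 \text{ billion},
\]

a sum that dwarfs the $15 million indemnity actually paid. Whether such a calculation is a meaningful counterfactual is, as noted above, debatable.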

Whatever the case, the decades following the war with the United States until the beginning of the administration of Porfirio Díaz (1876) are typically regarded as a step backward. The reasons are several. In 1850, the government essentially went broke. While it is true that its financial position had disintegrated since the mid-1830s, 1850 marked a turning point. The entire indemnity payment from the United States was consumed in debt service, but this made no appreciable dent in the outstanding principal, which hovered around 50 million pesos (dollars). The limits of debt sustainability had been reached: governing was turned into a wild search for resources, which proved fruitless. Mexico continued to sell off parts of its territory, as in the Treaty of the Mesilla (1853), or Gadsden Purchase, whose proceeds largely ended up in the hands of domestic financiers rather than foreign creditors.[31] Political divisions, already terrible before the war with the United States, turned catastrophic. A series of internal revolts, uprisings and military pronouncements segued into yet another violent civil war between liberals and conservatives—now a formal party—the so-called Three Years’ War (1856-58). In 1862, frustrated by Mexico’s suspension of foreign debt service, Great Britain, Spain and France seized Veracruz. A Hapsburg prince, Maximilian, was installed as Mexico’s second “emperor.” (Agustín de Iturbide was the first.) While only the French actively prosecuted the war within Mexico, and while they never controlled more than a very small part of the country, the disruption was substantial. By 1867, with Maximilian deposed and the French army withdrawn, the country required serious reconstruction.[32]

 

Juárez, Díaz, and the Porfiriato: Authoritarian Development

To be sure, the origins of authoritarian development in nineteenth-century Mexico did not lie with Porfirio Díaz, as is often asserted. They actually went back several decades earlier, to the last presidency of Santa Anna, generally known as the Dictatorship (1853-54). But Santa Anna was overthrown too quickly, and now for the last time, for much to have actually occurred. A ministry for development (Fomento) had been created, but the Liberal revolution of Ayutla swept Santa Anna and his clique away for good. Serious reform seems to have begun around 1870, when the Finance Minister was Matías Romero. Romero was intent on providing Mexico with a modern Treasury, and on ending the hand-to-mouth financing that had mostly characterized the country’s government since Independence, or at least since the mid-1830s. So it is appropriate to pick up the story here. Where did Mexico stand in 1870?[33]

The most revealing data that we have on the state of economic development come from various anthropometric and cost of living studies by Amilcar Challu, Aurora Gómez Galvarriato, and Moramay López Alonso.[34] Their research overlaps in part, and gives a fascinating picture of Mexico in the long run, from 1735 to 1940. For the moment, let us look at the period leading up to 1867, when the French withdrew from Mexico. If we look at the heights of the “literate” population, Challu’s research suggests that the standard of living stagnated between 1750 and 1840. If we look at the “illiterate” population, there was a consistent decline until 1850. Since the share of the illiterate population was clearly larger, we might infer that living standards for most Mexicans declined after 1750, however we interpret other quantitative and anecdotal evidence.

López Alonso confines her work to the period after the 1840s. From 1850 through 1890, her work generally corroborates Challu’s. The period after the Mexican War was clearly a difficult one for most Mexicans, and the challenge that both Juárez and Díaz faced was a macroeconomy in frank contraction after 1850. The regimes after 1867 were faced with stagnation.

The real wage study by Amilcar Challu and Aurora Gómez Galvarriato, when combined with the existing anthropometric work, offers a pretty clear correlation between movements in real wages (down) and heights (falling).[35]

It would then appear that growth from the 1850s through the 1870s was slow—if there was any at all—and perhaps inferior to what had come between the 1820s and the 1840s. Given the growth of import substitution during the Napoleonic Wars, roughly 1790-1810, coupled with the commercial opening brought by the Bourbons’ post-1789 extension of “free trade” to Mexico, we might well see a pattern of mixed performance (1790-1810), sharp contraction (the 1810s), rebound and recovery with sharp financial shocks in the mid-1820s and mid-1830s (1820s-1840s), and stagnation once more (1850s-1870s). Real per capita output oscillated, sometimes sharply, around an underlying growth rate of perhaps one percent; changes in the distribution of income and wealth are more or less impossible to identify consistently, because studies conflict.

Far less speculative is that the foundations for modern economic growth were laid down in Mexico during the era of Benito Juárez. Its key elements were the creation of a secular, bourgeois state and secular institutions embedded in the Constitution of 1857. The titanic ideological struggles between liberals and conservatives were ultimately resolved in favor of a liberal, but nevertheless centralizing, form of government under Porfirio Díaz. This was the beginning of the end of the Ancien Regime. Under Juárez, corporate lands of the Church and native villages were privatized in favor of individual holdings and their former owners compensated in bonds. This was effectively the largest transfer of land title since the late sixteenth century (not including the war with the United States) and it cemented the idea of individual property rights. With the expulsion of the French and the outright repudiation of the French debt, the Treasury was reorganized along more modern lines. The country got additional breathing room by the suspension of debt service to Great Britain until the terms of the 1825 loans were renegotiated under the Dublán Convention (1884). Equally, if not more important, Mexico now entered the railroad age in 1876, nearly forty years after the first tracks were laid in Cuba in 1837. The educational system was expanded in an attempt to create at least a core of literate citizens who could adopt the tools of modern finance and technology. Literacy still remained in the neighborhood of 20 percent, and life expectancy at birth scarcely reached 40 years of age, if that. Yet by the end of the Restored Republic (1876), Mexico had turned a corner. There would be regressions, but the nineteenth century had finally arrived, aptly if brutally signified by Juárez’ execution of Maximilian in Querétaro in 1867.[36]

Porfirian Mexico

Yet when Díaz came to power, Mexico was, in many ways, much as it had been a century earlier. It was a rural, agrarian nation whose principal agricultural product was maize, followed by wheat and beans. These were produced on haciendas and ranchos in Jalisco, Guanajuato, Michoacán, Mexico, Puebla as well as Oaxaca, Veracruz, Aguascalientes, Chihuahua and Sonora. Cotton, which with great difficulty had begun to supply a mechanized factory regime (first in spinning, then weaving), was produced in Oaxaca, Yucatán, Guerrero and Chiapas as well as in parts of Durango and Coahuila. Domestic production of raw cotton rarely sufficed to supply factories in Michoacán, Querétaro, Puebla and Veracruz, so imports from the Southern United States were common. For the most part, the indigenous population lived on maize, beans, and chile, producing its own subsistence on small, scattered plots known as milpas. Perhaps 75 percent of the population was rural, with the remainder to be found in cities like Mexico, Guadalajara, San Luis Potosí, and later, Monterrey. Population growth in the southern and eastern parts of the country had been relatively slow in the nineteenth century. The North and center-north grew more rapidly; the center of the country, less so. Immigration from abroad had been of no consequence.[37]

It is a commonplace to see the presidency of Porfirio Díaz (1876-1910) as a critical juncture in Mexican history, and this would be no less true of economic or commercial history as well. By 1910, when the Díaz government fell and Mexico descended into two decades of revolution, the first one extremely violent, the face of the country had been changed for good. The nature and effect of these changes remain not only controversial, but essential for understanding the subsequent evolution of the country, so we should pause here to consider some of their essential features.

While mining, and especially silver mining, had long held a privileged place in the economy, the nineteenth century had witnessed a number of significant changes. Until about 1889, the coinage of gold, silver, and copper—a very rough proxy for production, given how much silver had been illegally exported—continued on a steady upward track. In 1822, coinage was about 10 million pesos. By 1846, it had reached roughly 15 million pesos. There was something of a structural break after the war with the United States (its origins are unclear), and coinage continued upward to about 25 million pesos in 1888. Then the falling international price of silver, brought on by large increases in supply elsewhere, drove the trend sharply downward after 1889. By 1909-10, coinage had collapsed to levels not seen since the 1820s, although in 1904 and 1905 it had skyrocketed to nearly 45 million pesos.[38]

It comes as no surprise that these variations in production corresponded to sharp changes in international relative prices. For example, the market price of silver declined sharply relative to lead, Mexican production of which increased substantially, alongside a diversification into other metals including zinc, antimony, and copper. Mexico left the silver standard for international transactions in 1905 (while continuing to use silver domestically), which contributed to the eclipse of this one crucial industry. Silver would never again have the status it held when Díaz became president in 1876, when precious metals represented 75 percent of Mexican exports by value. By the time he had decamped in exile to Paris, precious metals accounted for less than half of all exports.

The reason for this relative decline was the diversification of agricultural exports that had been slowly occurring since the 1870s. Coffee, cotton, sugar, sisal and vanilla were the principal crops, and some regions of the country such as Yucatán (henequen) and Durango and Tamaulipas (cotton) supplied new export crops.

 

Railroads and Infrastructure

None of this would have occurred without the massive changes in land tenure that had begun in the 1850s, but most of all, without the construction of railroads financed by the migration of foreign capital to Mexico under Díaz. At one level, it is a well-known story of social savings, which were substantial in Mexico because the terrain was difficult and the alternative modes of carriage few. One way or another, transportation has always been viewed as an "obstacle" to Mexican economic development. That must be true at some level, although recent studies (especially by Sandra Kuntz) have raised important qualifications. Railroads may not have been gateways to foreign dependency, as historians once argued, but there were limits to their ability to effect economic change, even internally. They tended to enlarge the internal market for some commodities more than others. The peculiarities of rate-making produced other distortions, while markets for some commodities were inevitably concentrated in major cities or transshipment points which afforded some monopoly power to distributors even as a national market in basic commodities became more of a reality. Yet, in general, the changes were far reaching.[39]
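The social savings calculation mentioned above can be sketched in a minimal way; the symbols are illustrative and are not drawn from the Mexican estimates themselves. If $Q$ is the freight actually carried by rail, $P_a$ the unit cost of carriage by the best alternative mode (wagon or mule), and $P_r$ the unit cost by rail, then

$$\text{social savings} \approx (P_a - P_r)\,Q,$$

usually expressed as a share of GDP. The gap between $P_a$ and $P_r$ was unusually large in Mexico precisely because difficult terrain made wagon and mule carriage so costly.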

Conventional figures confirm conventional wisdom. When Díaz assumed the presidency, there were 660 km (410 miles) of track. In 1910, there were 19,280 km (about 12,000 miles). Seven major lines linked the cities of Mexico, Veracruz, Acapulco, Juárez, Laredo, Puebla, Oaxaca, Monterrey and Tampico in 1892. The lines were built by foreign capital (e.g., the Central Mexicano was built by the Atchison, Topeka and Santa Fe), which is why resolving the long-standing questions of foreign debt service was critical. Large government subsidies on the order of 3,500 to 8,000 pesos per km were granted, and financing the subsidies amounted to over 30 million pesos by 1890. While the railroads were successful in creating more of a national market, especially in the North, their finances were badly affected by the depreciation of the silver peso, given that foreign liabilities had to be liquidated in gold.

As a result, the government nationalized the railroads in 1903. At the same time, it undertook an enormous effort to construct infrastructure such as drainage and ports, virtually all of which was financed by British capital and managed by "Don Porfirio's contractor," Sir Weetman Pearson. Between railroads, ports, drainage works and irrigation facilities, the Mexican government borrowed 157 million pesos to finance costs.[40]

The expansion of the railroads, the build-out of infrastructure and the expansion of trade would normally have increased output per capita. Any data we have prior to 1930 are problematic, and before 1895, strictly speaking, we have no official measures of output per capita at all. Most scholars shy away from using levels of GDP in any form, other than for illustrative purposes. Aside from the usual problems attending national income accounting, Mexico presents a few exceptional challenges. In peasant families, where women were entrusted with converting maize into tortillas, no small job, the omission of their value added from GDP must constitute a sizeable defect in measured output. Moreover, as railroads, roads, and later highways extended the commercial radius of Mexican agriculture, part of measured growth represented increased commercialization of existing output rather than genuine increases in production. We have no idea how important this phenomenon was, but it is worth keeping in mind when we look at very rapid growth rates after 1940.
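A stylized example, with purely illustrative numbers, shows why commercialization can masquerade as growth. Suppose a village produces 100 units of maize, of which 20 are sold and 80 consumed at home, and that measured output captures only the marketed 20. If the arrival of a railroad leads the village to market 40 units while total production remains 100, measured output doubles even though real production is unchanged.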

There are various measures of cumulative growth during the Porfiriato. By and large, the figure from 1900 through 1910 is around 23 percent, which is certainly higher than rates achieved during the nineteenth century, but nothing like what was recorded after 1940. In light of declining real wages, one can only assume that the bulk of "progress" flowed to the recipients of property income. This may well have represented a reversal of trends in the nineteenth century, when some argue that property income contracted in the wake of the Insurgency.[41]
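As a rough check on the magnitude of the 23 percent figure, and assuming it refers to cumulative output growth over the decade, the implied annual rate is

$$(1.23)^{1/10} - 1 \approx 0.021,$$

or roughly 2.1 percent per year, which helps frame the comparison with the much faster growth recorded after 1940.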

There was also significant industrialization in Mexico during the Porfiriato. Some industry, especially textiles, had its origins in the 1840s, but its size, scale and location altered dramatically by the end of the nineteenth century. For example, the cotton textile industry saw the number of workers, spindles and looms more than double from the late 1870s to the first decade of the twentieth century. Brewing and its associated industry, glassmaking, became well established in Monterrey during the 1890s. The country's first iron and steel mill, Fundidora Monterrey, was established there as well in 1903. Other industries, such as papermaking and cigarettes, followed suit. By the end of the Porfiriato, certainly over 10 percent of Mexico's output was industrial.[42]

 

From Revolution to “Miracle”

The Mexican Revolution (1910-1940) began as a political upheaval provoked by a crisis in the presidential succession when Porfirio Díaz refused to leave office in the wake of electoral defeat after signaling his willingness to do so in a famous public interview of 1908.[43] It was also the result of an agrarian uprising and the insistent demand of Mexico's growing industrial proletariat for a share of political power. Finally, there was a small (fewer than 10 percent of all households) but upwardly mobile urban middle class created by economic development under Díaz whose access to political power had been effectively blocked by the regime's mechanics of political control. Precisely how "revolutionary" were the results of the armed revolt—which persisted largely through the 1910s and peaked in a civil war in 1914-1915—has long been contentious, but is only tangentially relevant as a matter of economic history. The Mexican Revolution was no Bolshevik movement (it predated the Bolshevik Revolution by seven years), but it was not a purely bourgeois constitutional movement either, although it did contain substantial elements of both.

From a macroeconomic standpoint, it has become fashionable to argue that the Revolution had few, if any, profound economic consequences. The principal reason seems to be that revolutionary factions were interested in appropriating rather than destroying the means of production. For example, the production of crude oil peaked in Mexico in 1915—at the height of the Revolution—because crude oil could be used as a source of income for the group controlling the wells in Veracruz state. This was a powerful consideration.[44]

Yet in another sense, the conclusion that the Revolution had slight economic effects is not only facile, but obviously wrong. As the demographic historian Robert McCaa showed, the excess mortality occasioned by the Revolution was larger than that of any similar event in Mexican history other than the conquest in the sixteenth century. No attempt has been made to measure the output lost through this demographic wastage (including births that never occurred), yet even the effect on the population cohort born between 1910 and 1920 is plain to see in later demographic studies.[45]

There is also a subtler question that some scholars have raised. The Revolution increased labor mobility and the labor supply by abolishing constraints on the rural population such as debt peonage and even outright slavery. Moreover, the Revolution, by encouraging and ultimately setting into motion a massive redistribution of previously privatized land, contributed to an enlarged supply of that factor of production as well. The true impact of these developments was realized in the 1940s and 1950s, when rapid economic growth began, the so-called Mexican Miracle, which was characterized by rates of real growth of as much as 6 percent per year (1955-1966). Whatever the connection between the Revolution and the Miracle, establishing it will require serious examination on empirical grounds and not simply a dogmatic dismissal of what is now regarded as unfashionable development thinking: import substitution and inward-oriented growth.[46]

The other major consequence of the Revolution, the agrarian reform and the creation of the ejido (land granted by the Mexican state to the rural population under the authority provided by the revolutionary Constitution of 1917), took considerable time to coalesce, and was arguably not even high on the list of priorities of one of the Revolution's principal instigators, Francisco Madero. The redistribution of land to the peasantry in the form of possession if not ownership – a kind of return to real or fictitious preconquest and colonial forms of land tenure – did peak during the avowedly reformist, and even modestly radical, presidency of Lázaro Cárdenas (1934-1940) after making only halting progress under his predecessors since the 1920s. From 1940 to 1965, the cultivated area in Mexico grew at 3.7 percent per year and productivity in basic food crops rose by 2.8 percent per year.

Nevertheless, the long-run effects of the agrarian reform and land redistribution have been predictably controversial. Under the presidency of Carlos Salinas (1988-1994) the reform was officially declared over, with no further land redistribution to be undertaken and the legal status of the ejido definitively changed. The principal criticism of the ejido was that, in the long run, it encouraged inefficiently small landholding per farmer and, by virtue of its limitations on property rights, made agricultural credit difficult for peasants to obtain.[47]

There is no doubt these are justifiable criticisms, but they have to be placed in context. Cárdenas' predecessors in office, Alvaro Obregón (1920-1924) and Plutarco Elías Calles (1924-1928), may well have preferred a more commercial model of agriculture with larger, irrigated holdings. But it is worth recalling that one of the original agrarian leaders of the Revolution, Emiliano Zapata, had from the start an uneasy relationship with Madero, who saw the Revolution in mostly political terms, and quickly rejected Madero's leadership in favor of restoring peasant lands in his native state of Morelos. Cárdenas, who was in the midst of several major maneuvers that would require widespread popular support—such as the expropriation of foreign oil companies operating in Mexico in March 1938—was undoubtedly sensitive to the need to mobilize the peasantry on his behalf. The agrarian reform of his presidency, which surpassed that of any other, needs to be considered in those terms as well as in terms of economic efficiency.[48]

Cárdenas' presidency also coincided with the continuation of the Great Depression. Like other countries in Latin America, Mexico was hard hit by the Depression, at least through the early 1930s. All sorts of consumer goods became scarcer, and the depreciation of the peso raised the relative price of imports. As had happened previously in Mexican history (in 1790-1810, during the Napoleonic Wars and the disruption of the Atlantic trade), domestic industry was nevertheless given a stimulus in the medium term, and import substitution, the subsequent core of Mexico's industrialization program after World War II, received a decisive boost. On the other hand, Mexico also experienced the forced "repatriation" of people of Mexican descent, mostly from California, of whom 60 percent were United States citizens. The effects of this movement—the emigration of the Revolution in reverse—have never been properly analyzed. The general consensus is that World War II helped Mexico to prosper. Demand for labor and materials from the United States, with which Mexico was allied, raised real wages and incomes, and thus boosted aggregate demand. From 1939 through 1946, real output in Mexico grew by approximately 50 percent. Population growth accelerated as well, as the country moved through the demographic transition, with a falling death rate while birth rates remained high.[49]

 

From Miracle to Meltdown: 1950-1982  

The history of import substitution manufacturing did not begin with postwar Mexico, but few countries (especially in Latin America) became as closely identified with the policy as Mexico did in the 1950s, under what Mexicans termed "stabilizing development." There was never anything resembling a formal policy announcement, although Raúl Prebisch's 1949 manifesto, "The Economic Development of Latin America and its Principal Problems," might be regarded as supplying one. Prebisch's argument, that the composition of imports should be deliberately shifted toward capital goods in order to facilitate domestic industrialization, was, in essence, the basis of the policy that Mexico followed. Mexico stabilized the nominal exchange rate at 12.5 pesos to the dollar in 1954, and further movements in the real exchange rate were unimportant until the 1970s. The substantive bias of import substitution in Mexico was a high effective rate of protection to both capital and consumer goods. Jaime Ros has calculated that these rates ranged between 47 and 85 percent in 1960, and between 33 and 109 percent in 1980. The result, in the short to intermediate run, was very rapid economic growth, averaging 6.5 percent per year from 1950 through 1973. Other than Brazil, which also followed an import substitution regime, no country in Latin America experienced higher rates of growth, and Mexico's was substantially above the regional average.[50]
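The effective rates of protection that Ros reports rest on a standard formula that can be sketched briefly; the numbers below are illustrative assumptions, not his estimates. With a nominal tariff $t$ on the final good, a tariff $t_i$ on imported inputs, and an input share $a$ in the value of output at world prices,

$$ERP = \frac{t - a\,t_i}{1 - a}.$$

For instance, $t = 0.20$, $t_i = 0.05$ and $a = 0.5$ give $ERP = (0.20 - 0.025)/0.5 = 0.35$, or 35 percent, which is why protection of value added can substantially exceed the nominal tariff.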

[See the historical graph of population growth in Mexico through 2000 below]


Source: Essentially, Estadísticas Históricas de México (various editions since 1999; the most recent is 2014)

http://dgcnesyp.inegi.org.mx/ehm/ehm.htm (Accessed July 20, 2016)

 

But there were unexpected results as well. The contribution of labor to GDP growth was 14 percent, capital's contribution was 53 percent, and total factor productivity (TFP) accounted for 28 percent.[51] As a consequence, while Mexico's growth occurred through the accumulation of capital, the distribution of income became extremely skewed. The ratio of the top 10 percent of household income to the bottom 40 percent was 7 in 1960, and 6 in 1968. Even supporters of Mexico's development program, such as Carlos Tello, conceded that it was probably the organized peasants and workers who experienced an effective improvement in their relative position. The fruits of the Revolution were unevenly distributed, even among the working class.[52]
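The growth accounting arithmetic behind contributions of this kind can be sketched in a minimal way; the figures here are illustrative and are not Hoffman's underlying data. Writing output growth as

$$g_Y = s_K\,g_K + s_L\,g_L + g_{TFP},$$

with $s_K$ and $s_L$ the factor shares, each term divided by $g_Y$ gives a percentage contribution of the kind reported above. At an average growth rate of 6.5 percent per year, for example, a capital contribution of 53 percent would correspond to roughly 3.4 percentage points of annual growth attributable to capital accumulation.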

By "organized" one means such groups as the most important labor union in the country, the CTM (Confederation of Mexican Workers), or the nationally recognized peasant union, the CNC, which formed two of the three organized sectors of the official government party, the PRI, or Party of the Institutional Revolution, organized in 1946. The CTM in particular was instrumental in supporting the official policy of import substitution, and thus benefited from government wage setting and political support. The leaders of these organizations became important political figures in their own right. One, Fidel Velázquez, served as both a federal senator and the head of the CTM from 1941 until his death in 1997. The incorporation of these labor and peasant groups into the political system offered the government both a means of control and a guarantee of electoral support. They became pillars of what the Peruvian writer Mario Vargas Llosa famously called "the perfect dictatorship" of the PRI from 1946 to 2000, during which the PRI held a monopoly of the presidency and the important offices of state. In a sense, import substitution was the economic ideology of the PRI.[53]

The relationship between labor and economic development during the years of rapid growth is, like many other topics, a debated subject. While some have found strong wage growth, others, looking mostly at Mexico City, have found declining real wages. Beyond that, there is the question of informality and a segmented labor market. Were workers in the CTM the real beneficiaries of economic growth, while others in the informal sector (defined as receiving no social security coverage, meaning roughly two-thirds of Mexican workers) did far less well? The attraction of a segmented labor market model is that it can address one obvious puzzle: why would industry substitute capital for labor, as it obviously did, if real wages were not rising? Postulating an informal sector that absorbed the rapid influx of rural migrants and so held nominal wages steady, while organized labor in the CTM obtained higher negotiated wages at the cost of limiting its own employment, is an attractive hypothesis, but it would not command universal agreement. Nothing has been resolved, at least for the period of the "Miracle." After Mexico entered a prolonged series of economic crises in the 1980s—here labelled "meltdown"—the discussion must change, because many hold that the key to relative political stability and the failure of open unemployment to rise sharply can be explained by falling real wages.

The fiscal basis on which the years of the Miracle were constructed was conventional, not to say conservative.[54] A stable nominal exchange rate, balanced budgets, limited public borrowing, and a predictable monetary policy were all predicated on the notion that the private sector would react positively to favorable incentives. By and large, it did. Until the late 1960s, foreign borrowing was considered inconsequential, even if there was some concern on the horizon that it was starting to rise. No one foresaw serious macroeconomic instability. It is worth consulting a brief memorandum from Secretary of State Dean Rusk to President Lyndon Johnson (Washington, December 11, 1968) to get some insight into how informed contemporaries viewed Mexico. The instability that existed was seen as a consequence of heavy-handedness on the part of the PRI and overreaction by the security forces. Informed observers did not view Mexico's embrace of import-substitution industrialization as a train wreck waiting to happen. Historical actors are rarely so prescient.[55]

 

Slowing of the Miracle and Echeverría

The most obvious problems in Mexico were political. They stemmed from the increasing awareness that the limits of the "institutional revolution" had been reached, particularly regarding the growing democratic demands of the urban middle classes. The economic problem, which was far from obvious, was that import substitution had concentrated income in the upper 10 percent of the population, so that domestic demand had begun to stagnate. Initially at least, public sector borrowing could support a variety of consumption subsidies to the population, and there were also efforts to transfer resources out of agriculture via domestic prices for staples such as maize. Yet Mexico's population was also growing at the rate of nearly 3 percent per year, so that the long-term prospects for any of these measures were cloudy.

At the same time, growing political pressures on the PRI, most dramatically manifest in the army's violent repression of student demonstrators at Tlatelolco in 1968 just prior to the Olympics, had convinced some elements in the PRI, people like Carlos Madrazo, to argue for more radical change. The emergence of an incipient guerrilla movement in the state of Guerrero had much the same effect. The new president, Luis Echeverría (1970-76), openly pushed for changes in the distribution of income and wealth, incited agrarian discontent for political purposes, dramatically increased government spending and borrowing, and alienated what had typically been a complaisant, if not especially friendly, private sector.

The country's macroeconomic performance began to deteriorate dramatically. Inflation, normally in the range of about 5 percent, rose into the low 20 percent range in the early 1970s. The public sector deficit, fueled by increasing social spending, rose from 2 to 7 percent of GDP. Money supply growth now averaged about 14 percent per year. Real GDP growth had begun to slip after 1968, and in the early 1970s it deteriorated further, if unevenly. There had been clear convergence of regional economies in Mexico between 1930 and 1980 because of changing patterns of industrialization in the northern and central regions of the country. After 1980, that process stalled and regional inequality again widened.[56]

While there is a tendency to blame Luis Echeverría for all or most of these developments, this forgets that his administration coincided with the first OPEC oil shock (1973) and rapidly deteriorating external conditions. Mexico had not yet discovered the oil reserves (1978) that were to provide a temporary respite from economic adjustment after the shock of the peso devaluation of 1976—the first change in its value in over 20 years. At the same time, external demand fell, principally transmitted from the United States, Mexico's largest trading partner, where the economy had fallen into recession in late 1973. Yet it seems reasonable to conclude that the difficult international environment, while important in bringing Mexico's "miracle" period to a close, was made worse by Echeverría's propensity for demagoguery and by the loss of the fiscal discipline that had characterized government policy at least since the 1950s. The only question to be resolved was what sort of conclusion the period would come to. The answer, unfortunately, was disastrous.[57]

 

Meltdown: The Debt Crisis, the Lost Decade and After

In contemporary parlance, Mexico had passed from "stabilizing" to "shared" development under Echeverría. But the devaluation of 1976 from 12.5 to 20.5 pesos to the dollar suggested that something had gone awry. One might suppose that some adjustment in course, especially in public spending and borrowing, would have followed. Precisely the opposite occurred. Between 1976 and 1979, nominal federal spending doubled. The budget deficit increased by a factor of 15. The reason for this odd performance was the discovery of crude oil in the Gulf of Mexico, perhaps unsurprising in light of the spiking prices of the 1970s (the oil shocks of 1973-74 and 1978-79), but nevertheless of considerable magnitude. In 1975, Mexico's proven reserves were 6 billion barrels of oil. By 1978, they had increased to 40 billion. President López Portillo set himself to the task of "administering abundance," and Mexican analysts confidently predicted crude oil at $100 a barrel (when it stood at $37 in current prices in 1980). The scope of the miscalculation was catastrophic. At the same time, encouraged by bank loan pushing and effectively negative real rates of interest, Mexico borrowed abroad. Consumption subsidies, while vital in the face of slowing import substitution, were also costly and, when supported by foreign borrowing, unsustainable; yet foreign indebtedness doubled between 1976 and 1979, and rose even further thereafter.

Matters came to a head in 1982. By then, Mexico's foreign indebtedness was estimated at over $80 billion, an increase from less than $20 billion in 1975. Real interest rates had begun to rise in the United States in mid-1981, and with Mexican borrowing tied to international rates, debt service rapidly increased. Oil revenue, which had come to constitute the great bulk of foreign exchange, followed international crude prices downward, driven in large part by a recession that had begun in the United States in mid-1981. Within six months, Mexico, too, had fallen into recession. Real per capita output was to decline by 8 percent in 1982. Forced to devalue sharply, the government saw the real exchange rate fall by 50 percent in 1982, while inflation approached 100 percent. By the late summer, Finance Minister Jesús Silva Herzog admitted that the country could not meet an upcoming payment obligation, and was forced to turn to the US Federal Reserve, to the IMF, and to a committee of bank creditors for assistance. Soon after, in a remarkable display of intemperance, President López Portillo nationalized the banking system. On December 20, 1982, Mexico's incoming president, Miguel de la Madrid (1982-88), appeared, beleaguered, on the cover of Time magazine framed by the caption, "We are in an Emergency." It was, as the saying goes, a perfect storm, and with it the Debt Crisis and the "Lost Decade" in Mexico had begun. It would be years before anything resembling stability, let alone prosperity, was restored. Even then, what growth there was proved a pale imitation of what had occurred during the decades of the "Miracle."

 

The 1980s

The 1980s were a difficult decade.[58] After 1981, annual real per capita growth would not reach 4 percent again until 1989, and in 1986 it fell by 6 percent. In 1987, inflation reached 159 percent. The peso depreciated against the dollar by 139 percent in 1986-1987. By the standards of the years of stabilizing development, the record of the 1980s was disastrous. To complete the devastation, on September 19, 1985, the worst earthquake in Mexican history, 7.8 on the Richter scale, devastated large parts of central Mexico City and killed 5,000 people (some estimates run as high as 25,000), many of whom were simply buried in mass graves. It was as if a plague of biblical proportions had struck the country.

Massive indebtedness produced a dramatic decline in the standard of living as structural adjustment occurred. Servicing the debt required the production of a surplus in non-oil exports, which in turn required a reduction in domestic consumption. In an effort to surmount the crisis, the government implemented an agreement between organized labor, the private sector, and agricultural producers called the Economic Solidarity Pact (PSE). The PSE combined an incomes policy with fiscal austerity, trade and financial liberalization, generally tight monetary policy, and debt renegotiation and reduction. The centerpiece of the "remaking" of the previously inward orientation of the domestic economy was the North American Free Trade Agreement (NAFTA, 1993) linking Mexico, the United States, and Canada. While average tariff rates in Mexico had fallen from 34 percent in 1985 to 4 percent in 1992—even before NAFTA was signed—the agreement was generally seen as creating the institutional and legal framework whereby the reforms of Miguel de la Madrid and Carlos Salinas (1988-1994) would be preserved. Most economists thought its effects would be relatively larger in Mexico than in the United States, which generally appears to have been the case. Nevertheless, NAFTA has been predictably controversial, as trade agreements are wont to be. The political furor (and, in some places, euphoria) surrounding the agreement has faded, but never entirely disappeared. In the United States in particular, NAFTA is blamed for deindustrialization, although pressure on manufacturing, like trade liberalization itself, was underway long before NAFTA was negotiated. In Mexico, there has been much hand-wringing over the fate of agriculture, and of small maize producers in particular. While none of this is likely to cease, it is nevertheless the case that there has been a large increase in the volume of trade between the NAFTA partners. To dismiss this is, quite plainly, misguided, even where sensitive and well-organized political constituencies are concerned. But the legacy of NAFTA, like most everything in Mexican economic history, remains unsettled.

 

Post Crisis: No Miracles

Still, while some prosperity was restored to Mexico by the reforms of the 1980s and 1990s, the general macroeconomic results have been disappointing, not to say mediocre. According to the Instituto Nacional de Estadística, Geografía e Informática, average real compensation per person in manufacturing in 2008 was virtually unchanged from 1993, and there is little reason to think compensation has improved at all since then. It is generally conceded that per capita GDP growth has probably averaged not much more than 1 percent a year. Real GDP growth since NAFTA, according to the OECD, has rarely reached 5 percent, and since 2010 it has been well below that.

 

 

Source: http://www.worldbank.org/en/country/mexico (Accessed July 21, 2016). The vertical scale cuts the horizontal axis at 1982

 

For virtually everyone in Mexico, the question is why, and the answers proposed include almost any plausible factor: the breakdown of the political system after the PRI's historic loss of presidential power in 2000; the rise of China as a competitor to Mexico in international markets; the explosive spread of narcoviolence in recent years, albeit concentrated in the states of Sonora, Sinaloa, Tamaulipas, Nuevo León and Veracruz; the results of NAFTA itself; the failure of the political system to undertake further structural economic reforms and privatizations after the initial changes of the 1980s, especially regarding the national oil monopoly, Petróleos Mexicanos (PEMEX); and the failure of the border industrialization program (maquiladoras) to develop substantive backward linkages to the rest of the economy. This is by no means an exhaustive list of candidate explanations for poor economic performance. The choice of a cause tends to reflect the ideology of the critic.[59]

Yet it seems that, at the end of the day, the reason why post-NAFTA Mexico has failed to grow comes down to something much more fundamental: a fear of growing, embedded in the belief that the collapse of the 1980s and early 1990s (including the devastating "Tequila Crisis" of 1994-1995, which resulted in another enormous devaluation of the peso after an initial attempt to contain the crisis was bungled) was so traumatic and costly as to render even modest efforts to promote growth, let alone the dirigisme of times past, essentially unwarranted. The central bank, the Banco de México (Banxico), rules out the promotion of economic growth as part of its remit—even as a theoretical proposition, let alone as a goal of macroeconomic policy—and concerns itself only with price stability. The language of its formulation is striking. "During the 1970s, there was a debate as to whether it was possible to stimulate economic growth via monetary policy. As a result, some governments and central banks tried to reduce unemployment through expansive monetary policy. Both economic theory and the experience of economies that tried this prescription demonstrated that it lacked validity. Thus, it became clear that monetary policy could not actively and directly stimulate economic activity and employment. For that reason, modern central banks have as their primary goal the promotion of price stability" (translation mine). Banxico is not the Fed: there is no dual mandate in Mexico.[60]

The Mexican banking system has scarcely made things easier. Private credit stands at only about a third of GDP. In recent years, the increase in private sector savings has been largely channeled to government bonds, but until quite recently public sector deficits were very small, which is to say that fiscal policy has not been expansionary. If monetary and fiscal policy are both relatively tight, if private credit is not easy to come by, and if growth is typically presumed to be an inevitable concomitant of economic stability for which no actor (other than the private sector) is deemed responsible, it should come as no surprise that economic growth over the past two decades has been lackluster. In the long run, aggregate supply determines real GDP, but in the short run nominal demand matters: there is no point in creating productive capacity to satisfy demand that does not exist. And, unlike during the period of the Miracle and Stabilizing Development, attention to demand since 1982 has been limited, not to say off the table completely. It may be understandable, but Mexico's fiscal and monetary authorities seem to suffer from what could be termed a "fear of growth." For better or worse, the results are now on display. After the current (2016) return to a relatively austere budget, it remains to be seen how the economic and political system in contemporary Mexico handles slow economic growth. That would now seem to be, in a basic sense, its largest challenge for the future.

[1] I am grateful to Ivan Escamilla and Robert Whaples for their careful readings and thoughtful criticisms.

[2] The standard reference work is Sandra Kuntz Ficker, (ed), Historia económica general de México. De la Colonia a nuestros días (México, DF: El Colegio de Mexico, 2010).

[3] Oscar Martinez, Troublesome Border (rev. ed., University of Arizona Press: Tucson, AZ, 2006) is the most helpful general account in English.

[4] There are literally dozens of general accounts of the pre-conquest world. A good starting point is Richard E.W. Adams, Prehistoric Mesoamerica (3d ed., University of Oklahoma Press: Norman, OK, 2005). More advanced is Richard E.W. Adams and Murdo J. Macleod, The Cambridge History of the Mesoamerican Peoples: Mesoamerica. (2 parts, New York: Cambridge University Press, 2000).

[5] Nora C. England and Roberto Zavala Maldonado, “Mesoamerican Languages” Oxford Bibliographies http://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0080.xml

(Accessed July 10, 2016)

[6] For an introduction to the nearly endless controversy over the pre- and post-contact population of the Americas, see William M. Denevan (ed.), The Native Population of the Americas in 1492 (2d rev ed., Madison: University of Wisconsin Press, 1992).

[7] Sherburne F Cook and Woodrow Borah, Essays in Population History: Mexico and California (Berkeley, CA: University of California Press, 1979), p. 159.

[8] Gene C. Wilken, Good Farmers: Traditional Agricultural Resource Management in Mexico and Central America (Berkeley: University of California Press, 1987), p. 24.

[9] Bernard Ortiz de Montellano, Aztec Medicine, Health, and Nutrition (New Brunswick, NJ: Rutgers University Press, 1990).

[10] Bernardo García Martínez, “Encomenderos españoles y British residents: El sistema de dominio indirecto desde la perspectiva novohispana”, in Historia Mexicana, LX: 4 [140] (abr-jun 2011), pp. 1915-1978.

[11] These epidemics are extensively and exceedingly well documented. One of the most recent examinations is Rodolfo Acuna-Soto, David W. Stahle, Matthew D. Therrell, Richard D. Griffin, and Malcolm K. Cleaveland, "When Half of the Population Died: The Epidemic of Hemorrhagic Fevers of 1576 in Mexico," FEMS Microbiology Letters 240 (2004), pp. 1-5 (http://femsle.oxfordjournals.org/content/femsle/240/1/1.full.pdf, accessed July 10, 2016). See in particular the exceptional map and table on pp. 2-3.

[12] See in particular, Bernardo García Martínez, Los pueblos de la Sierra: el poder y el espacio entre los indios del norte de Puebla hasta 1700 (Mexico, DF: El Colegio de México, 1987) and Elinor G.K. Melville, A Plague of Sheep: Environmental Consequences of the Conquest of Mexico (New York: Cambridge University Press, 1997).

[13] J. H. Elliott, “A Europe of Composite Monarchies,” Past & Present 137 (The Cultural and Political Construction of Europe): 48–71; Guadalupe Jiménez Codinach, “De Alta Lealtad: Ignacio Allende y los sucesos de 1808-1811,” in Marta Terán and José Antonio Serrano Ortega, eds., Las guerras de independencia en la América Española (La Piedad, Michoacán, MX: El Colegio de Michoacán, 2002), p. 68.

[14] Richard Salvucci, "Capitalism and Dependency in Latin America," in Larry Neal and Jeffrey G. Williamson, eds., The Cambridge History of Capitalism (2 vols., New York: Cambridge University Press, 2014), 1: pp. 403-408.

[15] Source: TePaske Page, http://www.insidemydesk.com/hdd.html (Accessed July 19, 2016)

[16] Edith Boorstein Couturier, The Silver King: The Remarkable Life of the Count of Regla in Colonial Mexico (Albuquerque, NM: University of New Mexico Press, 2003); Dana Velasco Murillo, Urban Indians in a Silver City: Zacatecas, Mexico, 1546-1810 (Stanford, CA: Stanford University Press, 2015), p. 43. The standard work on the subject is David Brading, Miners and Merchants in Bourbon Mexico, 1763-1810 (New York: Cambridge University Press, 1971). But also see Robert Haskett, "Our Suffering with the Taxco Tribute: Involuntary Mine Labor and Indigenous Society in Central New Spain," Hispanic American Historical Review, 71:3 (1991), pp. 447-475. For silver in China, see http://afe.easia.columbia.edu/chinawh/web/s5/s5_4.html (accessed July 13, 2016). For the rents of empire question, see Michael Costeloe, Response to Revolution: Imperial Spain and the Spanish American Revolutions, 1810-1840 (New York: Cambridge University Press, 1986).

[17] This is an estimate. David Ringrose concluded that in the 1780s, the colonies accounted for 45 percent of Crown income, and one would suppose that Mexico would account for at least about half of that. See David R. Ringrose, Spain, Europe and the ‘Spanish Miracle’, 1700-1900 (New York: Cambridge University Press, 1996), p. 93; Mauricio Drelichman, “The Curse of Moctezuma: American Silver and the Dutch Disease,” Explorations in Economic History 42:3 (2005), pp. 349-380.

[18] José Antonio Escudero, El supuesto memorial del Conde de Aranda sobre la Independencia de América (México, DF: Universidad Nacional Autónoma de México, 2014) (http://bibliohistorico.juridicas.unam.mx/libros/libro.htm?l=3637, accessed July 13, 2016).

[19] Allan J. Kuethe and Kenneth J. Andrien, The Spanish Atlantic World in the Eighteenth Century. War and the Bourbon Reforms, 1713-1796 (New York: Cambridge University Press, 2014) is the most recent account of this period.

[20] Richard J. Salvucci, “Economic Growth and Change in Bourbon Mexico: A Review Essay,” The Americas, 51:2 (1994), pp. 219-231; William B Taylor, Magistrates of the Sacred: Priests and Parishioners in Eighteenth Century Mexico (Palo Alto: Stanford University Press, 1996), p. 24; Luis Jáuregui, La Real Hacienda de Nueva España. Su Administración en la Época de los Intendentes, 1786-1821 (México, DF: UNAM, 1999), p. 157.

[21] Jeremy Baskes, Staying Afloat: Risk and Uncertainty in Spanish Atlantic World Trade, 1760-1820 (Stanford, CA: Stanford University Press, 2013); Xabier Lamikiz, Trade and Trust in the Eighteenth-Century Atlantic World: Spanish Merchants and their Overseas Networks (Suffolk, UK: The Boydell Press, 2013). The starting point of all these studies is Clarence Haring, Trade and Navigation between Spain and the Indies in the Time of the Hapsburgs (Cambridge, MA: Harvard University Press, 1918).

[22] The best, and indeed virtually unique, starting point for considering these changes in their broadest dimensions is the set of joint works by Stanley and Barbara Stein: Silver, Trade, and War (2003); Apogee of Empire (2004); and Edge of Crisis (2010). All were published by Johns Hopkins University Press and do for the Spanish Empire what Lawrence Henry Gipson did for the First British Empire.

[23] The key work is María Eugenia Romero Sotelo, Minería y Guerra. La economía de Nueva España, 1810-1821 (México, DF: UNAM, 1997)

[24] Calculated from José María Luis Mora, Crédito Público ([1837] México, DF: Miguel Angel Porrúa, 1986), pp. 413-460. Also see Richard J. Salvucci, Politics, Markets, and Mexico’s “London Debt,” 1823-1887 (NY: Cambridge University Press, 2009).

[25] Jesús Hernández Jaimes, La Formación de la Hacienda Pública Mexicana y las Tensiones Centro -Periferia, 1821-1835  (México, DF: El Colegio de México, 2013). Javier Torres Medina, Centralismo y Reorganización. La Hacienda Pública Durante la Primera República Central de México, 1835-1842 (México, DF: Instituto Mora, 2013). The only treatment in English is Michael P. Costeloe, The Central Republic in Mexico, 1835-1846 (New York: Cambridge University Press, 1993).

[26] An agricultural worker who worked full time, 6 days a week, for the entire year (a strong assumption), in Central Mexico could have expected cash income of perhaps 24 pesos. If food, such as beans and tortillas, were added, the whole pay might reach 30 pesos. The figure of 40 pesos comes from considerably richer agricultural lands around the city of Querétaro, and includes an average of income from nonagricultural employment as well, which was higher. Measuring Worth would put the relative historic standard of living value in 2010 prices at $1,040, with the caveat that this is relative to a bundle of goods purchased in the United States. (https://www.measuringworth.com/uscompare/relativevalue.php).

[27]The phrase comes from Guido di Tella and Manuel Zymelman. See Colin Lewis, “Explaining Economic Decline: A review of recent debates in the economic and social history literature on the Argentine,” European Review of Latin American and Caribbean Studies, 64 (1998), pp. 49-68.

[28] Francisco Téllez Guerrero, De reales y granos. Las finanzas y el abasto de la Puebla de los Angeles, 1820-1840 (Puebla, MX: CIHS, 1986). Pp. 47-79.

[29]This is based on an analysis of government lending contracts. See Rosa María Meyer and Richard Salvucci, “The Panic of 1837 in Mexico: Evidence from Government Contracts” (in progress).

[30] There is an interesting summary of this data in U.S. Govt., 57th Cong., 1st sess., House, Monthly Summary of Commerce and Finance of the United States (September 1901) (Washington, DC: GPO, 1901), pp. 984-986.

[31] Salvucci, Politics and Markets, pp. 201-221.

[32] Miguel Galindo y Galindo, La Gran Década Nacional o Relación Histórica de la Guerra de Reforma, Intervención Extranjera, y gobierno del archiduque Maximiliano, 1857-1867 ([1902], 3 vols., México, DF: Fondo de Cultura Económica, 1987).

[33] Carmen Vázquez Mantecón, Santa Anna y la encrucijada del Estado. La dictadura, 1853-1855 (México, DF: Fondo de Cultura Económica, 1986).

[34] Moramay López-Alonso, Measuring Up: A History of Living Standards in Mexico, 1850-1950 (Stanford, CA: Stanford University Press, 2012); Amilcar Challú and Aurora Gómez Galvarriato, "Mexico's Real Wages in the Age of the Great Divergence, 1730-1930," Revista de Historia Económica 33:1 (2015), pp. 123-152; Amílcar E. Challú, "The Great Decline: Biological Well-Being and Living Standards in Mexico, 1730-1840," in Ricardo Salvatore, John H. Coatsworth, and Amilcar E. Challú, eds., Living Standards in Latin American History: Height, Welfare, and Development, 1750-2000 (Cambridge, MA: Harvard University Press, 2010), pp. 23-67.

[35]See Challú and Gómez Galvarriato, “Real Wages,” Figure 5, p. 101.

[36] Luis González et al, La economía mexicana durante la época de Juárez (México, DF: 1976).

[37] Teresa Rojas Rabiela and Ignacio Gutiérrez Ruvalcaba, Cien ventanas a los países de antaño: fotografías del campo mexicano de hace un siglo (México, DF: CONACYT, 2013), pp. 18-65.

[38] Alma Parra, “La Plata en la Estructura Económica Mexicana al Inicio del Siglo XX,” El Mercado de Valores 49:11 (1999), p. 14.

[39] Sandra Kuntz Ficker, Empresa Extranjera y Mercado Interno: El Ferrocarril Central Mexicano (1880-1907) (México, DF: El Colegio de México, 1995).

[40] Priscilla Connolly, El Contratista de Don Porfirio. Obras públicas, deuda y desarrollo desigual (México, DF: Fondo de Cultura Económica, 1997).

[41] Most notably John Tutino, From Insurrection to Revolution in Mexico: Social Bases of Agrarian Violence, 1750-1940 (Princeton, NJ: Princeton University Press, 1986), p. 229. My growth figures are based on INEGI, Estadísticas Históricas de México (2014) (http://dgcnesyp.inegi.org.mx/cgi-win/ehm2014.exe/CI080010, Accessed July 15, 2016).

[42] Stephen H. Haber, Industry and Underdevelopment: The Industrialization of Mexico, 1890-1940 (Stanford, CA: Stanford University Press, 1989); Aurora Gómez-Galvarriato, Industry and Revolution: Social and Economic Change in the Orizaba Valley (Cambridge, MA: Harvard University Press, 2013).

[43] There are literally dozens of accounts of the Revolution. The usual starting point, in English, is Alan Knight, The Mexican Revolution (reprint ed., 2 vols., Lincoln, NE: 1990).

[44] This argument has been made most insistently in Armando Razo and Stephen Haber, “The Rate of Growth of Productivity in Mexico, 1850-1933: Evidence from the Cotton Textile Industry,” Journal of Latin American Studies 30:3 (1998), pp. 481-517.

[45] Robert McCaa, "Missing Millions: The Demographic Cost of the Mexican Revolution," Mexican Studies/Estudios Mexicanos 19:2 (Summer 2003): 367-400; Virgilio Partida-Bush, "Demographic Transition, Demographic Bonus, and Ageing in Mexico," Proceedings of the United Nations Expert Group Meeting on Social and Economic Implications of Changing Population Age Structures (http://www.un.org/esa/population/meetings/Proceedings_EGM_Mex_2005/partida.pdf) (Accessed July 15, 2016), pp. 287-290.

[46] An implication of the studies of Alan Knight, and of Clark Reynolds, The Mexican Economy: Twentieth Century Structure and Growth (New Haven, CT: Yale University Press, 1971).

[47] An interesting summary of revisionist thinking on the nature and history of the ejido appears in Emilio Kourí, "La invención del ejido," Nexos, January 2015.

[48]Alan Knight, “Cardenismo: Juggernaut or Jalopy?” Journal of Latin American Studies, 26:1 (1994), pp. 73-107.

[49] Stephen Haber, “The Political Economy of Industrialization,” in Victor Bulmer-Thomas, John Coatsworth, and Roberto Cortes-Conde, eds., The Cambridge Economic History of Latin America (2 vols., New York: Cambridge University Press, 2006), 2:  537-584.

[50] Again, there are dozens of studies of the Mexican economy in this period. Ros' figures come from "Mexico's Trade and Industrialization Experience Since 1960: A Reconsideration of Past Policies and Assessment of Current Reforms," Kellogg Institute (Working Paper 186, January 1993). For a more general study, see Juan Carlos Moreno-Brid and Jaime Ros, Development and Growth in the Mexican Economy: A Historical Perspective (New York: Oxford University Press, 2009). A recent Spanish language treatment is Enrique Cárdenas Sánchez, El largo curso de la economía mexicana. De 1780 a nuestros días (México, DF: Fondo de Cultura Económica, 2015). A view from a different perspective is Carlos Tello, Estado y desarrollo económico. México 1920-2006 (México, DF: UNAM, 2007).

[51]André A. Hoffman, Long Run Economic Development in Latin America in a Comparative Perspective: Proximate and Ultimate Causes (Santiago, Chile: CEPAL, 2001), p. 19.

[52]Tello, Estado y desarrollo, pp. 501-505.

[53] Mario Vargas Llosa, “Mexico: The Perfect Dictatorship,” New Perspectives Quarterly 8 (1991), pp. 23-24.

[54] Rafael Izquierdo, Política Hacendaria del Desarrollo Estabilizador, 1958-1970 (México, DF: Fondo de Cultura Económica, 1995). The term "stabilizing development" was itself coined by Izquierdo while serving as a government minister.

[55]See Foreign Relations of the United States, 1964-1968. Mexico and Central America http://2001-2009.state.gov/r/pa/ho/frus/johnsonlb/xxxi/36313.htm (Accessed July 15, 2016).

[56] José Aguilar Retureta, "The GDP Per Capita of the Mexican Regions (1895-1930): New Estimates," Revista de Historia Económica, 33:3 (2015), pp. 387-423.

[57] For a contemporary account with a sense of the immediacy of the end of the Echeverría regime, see “Así se devaluó el peso,” Proceso, November 13, 1976.

[58] The standard account is Stephen Haber, Herbert Klein, Noel Maurer, and Kevin Middlebrook, Mexico since 1980 (New York: Cambridge University Press, 2008). A particularly astute economic account is Nora Lustig, Mexico: The Remaking of an Economy (2d ed., Washington, DC: The Brookings Institution, 1998).  But also Louise E. Walker, Waking from the Dream. Mexico’s Middle Classes After 1968 (Stanford, CA: Stanford University Press, 2013).

[59] See, for example, Jaime Ros Bosch, Algunas tesis equivocadas sobre el estancamiento económico de México (México, DF: El Colegio de México, 2013).

[60] La Banca Central y la Importancia de la Estabilidad Económica, June 16, 2008 (http://www.banxico.org.mx/politica-monetaria-e-inflacion/material-de-referencia/intermedio/politica-monetaria/%7B3C1A08B1-FD93-0931-44F8-96F5950FC926%7D.pdf, Accessed July 15, 2016). Also see Brian Winter, "This Man is Brilliant: So Why Doesn't Mexico's Economy Grow Faster?" Americas Quarterly (http://americasquarterly.org/content/man-brilliant-so-why-doesnt-mexicos-economy-grow-faster) (Accessed July 21, 2016).

 

 

Project 2000/2001

Project 2000

Each month during 2000, EH.NET published a review essay on a significant work in twentieth-century economic history. The purpose of these essays was to survey the works that have had the most influence on the field of economic history and to highlight the intellectual accomplishments of twentieth-century economic historians. Each review essay outlines the work’s argument and findings, discusses the author’s methods and sources, and examines the impact that the work has had since its publication.

Nominations were received from dozens of EH.Net's users. P2K selection committee members were: Stanley Engerman (University of Rochester), Alan Heston (University of Pennsylvania), Paul Hohenberg, chair (Rensselaer Polytechnic Institute), and Mary Yeager (University of California-Los Angeles). Project Chair was Robert Whaples (Wake Forest University).

The review essays are:

Braudel, Fernand
Civilization and Capitalism, 15th-18th Century Time
Reviewed by Alan Heston (University of Pennsylvania).

Chandler, Alfred D. Jr.
The Visible Hand: The Managerial Revolution in American Business
Reviewed by David S. Landes (Department of Economics and History, Harvard University).

Chaudhuri, K. N.
The Trading World of Asia and the English East India Company, 1660-1760
Reviewed by Santhi Hejeebu.

Davis, Lance E. and North, Douglass C. (with the assistance of Calla Smorodin)
Institutional Change and American Economic Growth.
Reviewed by Cynthia Taft Morris (Department of Economics, Smith College and American University).

Fogel, Robert W.
Railroads and American Economic Growth: Essays in Econometric History
Reviewed by Lance Davis (California Institute of Technology).

Friedman, Milton and Schwartz, Anna Jacobson
A Monetary History of the United States, 1867-1960
Reviewed by Hugh Rockoff (Rutgers University).

Heckscher, Eli F.
Mercantilism
Reviewed by John J. McCusker (Departments of History and Economics, Trinity University).

Landes, David S.
The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present
Reviewed by Paul M. Hohenberg (Rensselaer Polytechnic Institute).

Pinchbeck, Ivy
Women Workers and the Industrial Revolution, 1750-1850 
Reviewed by Joyce Burnette (Wabash College).

Polanyi, Karl
The Great Transformation: The Political and Economic Origins of Our Time
Reviewed by Anne Mayhew (University of Tennessee).

Schumpeter, Joseph A.
Capitalism, Socialism and Democracy 
Reviewed by Thomas K. McCraw (Harvard Business School).

Weber, Max
The Protestant Ethic and the Spirit of Capitalism
Reviewed by Stanley Engerman.

Project 2001

Throughout 2001 and 2002, EH.Net published a second series of review essays on important and influential works in economic history. As with Project 2000, nominations for Project 2001 were received from many EH.Net users and reviewed by the Selection Committee: Lee Craig (North Carolina State University); Giovanni Federico (University of Pisa); Anne McCants (MIT); Marvin McInnis (Queen's University); Albrecht Ritschl (University of Zurich); Winifred Rothenberg (Tufts University); and Richard Salvucci (Trinity College).

Project 2001 selections were:

Borah, Woodrow Wilson
New Spain’s Century of Depression
Reviewed by Richard Salvucci (Department of Economics, Trinity University).

Boserup, Ester
Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure
Reviewed by Giovanni Federico (Department of Modern History, University of Pisa).

Deane, Phyllis and W. A. Cole
British Economic Growth, 1688-1959: Trends and Structure
Reviewed by Knick Harley (Department of Economics, University of Western Ontario).

Fogel, Robert and Stanley Engerman
Time on the Cross: The Economics of American Negro Slavery
Reviewed by Thomas Weiss (Department of Economics, University of Kansas).

Gerschenkron, Alexander
Economic Backwardness in Historical Perspective
Review Essay by Albert Fishlow (International Affairs, Columbia University).

Horwitz, Morton
The Transformation of American Law, 1780-1860
Reviewed by Winifred B. Rothenberg (Department of Economics, Tufts University).

Kuznets, Simon
Modern Economic Growth: Rate, Structure and Spread
Reviewed by Richard A. Easterlin (Department of Economics, University of Southern California).

Le Roy Ladurie, Emmanuel
The Peasants of Languedoc
Reviewed by Anne E.C. McCants (Department of History, Massachusetts Institute of Technology).

North, Douglass and Robert Paul Thomas
The Rise of the Western World: A New Economic History
Reviewed by Philip R. P. Coelho (Department of Economics, Ball State University).

de Vries, Jan
The Economy of Europe in an Age of Crisis, 1600-1750
Review Essay by George Grantham (Department of Economics, McGill University).

Temin, Peter
The Jacksonian Economy
Reviewed by Richard Sylla (Department of Economics, Stern School of Business, New York University).

Wrigley, E. A. and R. S. Schofield
The Population History of England, 1541-1871: A Reconstruction

Project Coordinator and Editor: Robert Whaples (Wake Forest
University)

The National Recovery Administration

Barbara Alexander, Charles River Associates

This article outlines the history of the National Recovery Administration, one of the most important and controversial agencies in Roosevelt’s New Deal. It discusses the agency’s “codes of fair competition,” under which exemptions from antitrust law could be granted in exchange for the adoption of minimum wages; the problems some industries encountered in their subsequent attempts to fix prices under the codes; and the macroeconomic effects of the program.

The early New Deal suspension of antitrust law under the National Recovery Administration (NRA) is surely one of the oddest episodes in American economic history. In its two-year life, the NRA oversaw the development of so-called “codes of fair competition” covering the larger part of the business landscape.1 The NRA generally is thought to have represented a political exchange whereby business gave up some of its rights over employees in exchange for permission to form cartels.2 Typically, labor is taken to have gotten the better part of the bargain: the union movement extended its new powers after the Supreme Court struck down the NRA in 1935, while the business community faced a newly aggressive FTC by the end of the 1930s. While this characterization may be true in broad outline, close examination of the NRA reveals that matters may be somewhat more complicated than is suggested by the interpretation of the program as a win for labor contrasted with a missed opportunity for business.

Recent evaluations of the NRA have wended their way back to themes sounded during the early nineteen thirties, in particular the interrelationships between the so-called “trade practice” or cartelization provisions of the program and the grant of enhanced bargaining power to trade unions.3 On the microeconomic side, allowing unions to bargain for industry-wide wages may have facilitated cartelization in some industries. Meanwhile, macroeconomists have suggested that the Act and its progeny, especially labor measures such as the National Labor Relations Act, may bear more responsibility for the length and severity of the Great Depression than has heretofore been recognized.4 If this thesis holds up to closer scrutiny, the era may come to be seen as a primary example of the potential macroeconomic costs of shifts in political and economic power.

Kickoff Campaign and Blanket Codes

The NRA began operations in a burst of “ballyhoo” during the summer of 1933.5 The agency was formed upon passage of the National Industrial Recovery Act (NIRA) in mid-June. A kick-off campaign of parades and press events succeeded in getting over 2 million employers to sign a preliminary “blanket code” known as the “President’s Re-Employment Agreement.” Signatories of the PRA pledged to pay minimum wages ranging from around $12 to $15 per 40-hour week, depending on the size of town. Some 16 million workers were covered, out of a non-farm labor force of some 25 million. “Share-the-work” provisions called for limits of 35 to 40 hours per week for most employees.6

NRA Codes

Over the next year and a half, the blanket code was superseded by over 500 codes negotiated for individual industries. The NIRA provided that “Upon the application to the President by one or more trade or industrial associations or groups, the President may approve a code or codes of fair competition for the trade or industry.”7 The carrot held out to induce participation was enticing: “any code … and any action complying with the provisions thereof … shall be exempt from the provisions of the antitrust laws of the United States.”8 Representatives of trade associations overran Washington, and by the time the NRA was abolished, hundreds of codes covering over three-quarters of private, non-farm employment had been approved.9 Code signatories were supposed to be allowed to use the NRA “Blue Eagle” as a symbol that “we do our part” only as long as they remained in compliance with code provisions.10

Disputes Arise

Almost 80 percent of the codes had provisions directed at the establishment of price floors.11 The Act did not specifically authorize businesses to fix prices, and indeed it specified that “…codes are not designed to promote monopolies.”12 However, it is an understatement to say that there was never any consensus among firms, industries, and NRA officials as to precisely what was to be allowed as part of an acceptable code. Arguments about exactly what the NIRA allowed, and how the NRA should implement the Act, began during its drafting and continued unabated throughout its life. The arguments extended from the level of general principles to the smallest details of policy, which is unsurprising given that appropriate regulatory design depends entirely on precise regulatory objectives, and those objectives were here in dispute from start to finish.

To choose just one out of many examples of such disputes: There was a debate within the NRA as to whether “code authorities” (industry governing bodies) should be allowed to use industry-wide or “representative” cost data to define a price floor based on “lowest reasonable cost.” Most economists would understand this type of rule as a device that would facilitate monopoly pricing. However, a charitable interpretation of the views of administration proponents is that they had some sort of “soft competition” in mind. That is, they wished to develop and allow the use of mechanisms that would extend to more fragmented industries a type of peaceful coexistence more commonly associated with oligopoly. Those NRA supporters of the representative-cost-based price floor imagined that a range of prices would emerge if such a floor were to be set, whereas detractors believed that “the minimum would become the maximum,” that is, the floor would simply be a cartel price, constraining competition across all firms in an industry.13
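To put rough numbers on the dispute: the sketch below is purely illustrative and does not reproduce any actual NRA costing rule. It assumes three firms with hypothetical per-ton costs and outputs and sets the floor at the output-weighted average cost, one candidate “representative cost” rule. The most efficient firm can no longer price below the floor, which is why detractors expected the minimum to become the industry price, while the highest-cost firm still loses money at the floor, foreshadowing the small-producer conflicts discussed below.

# Illustrative sketch only: the NRA never settled on a single costing rule.
# Here the "lowest reasonable cost" floor is assumed to be the output-weighted
# average cost across firms; every cost and output figure is hypothetical.

firms = {
    "low-cost producer":  {"cost": 4.00, "output": 500},   # $/ton, tons per period
    "mid-cost producer":  {"cost": 5.00, "output": 300},
    "high-cost producer": {"cost": 6.50, "output": 200},
}

total_output = sum(f["output"] for f in firms.values())
floor = sum(f["cost"] * f["output"] for f in firms.values()) / total_output

print(f"Representative-cost price floor: ${floor:.2f} per ton")
for name, f in firms.items():
    # Margin each firm would earn if every firm simply posted the floor price.
    print(f"  {name}: cost ${f['cost']:.2f}, margin at the floor ${floor - f['cost']:+.2f}")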

Price Floors

While a rule allowing emergency price floors based on “lowest reasonable cost” was eventually approved, there was no coherent NRA program behind it.14 Indeed, the NRA and code authorities often operated at cross-purposes. At the same time that some officials of the NRA arguably took actions to promote softened competition, some in industry tried to implement measures more likely to support hard-core cartels, even when they thereby reduced the chance of soft competition should collusion fail. For example, with the partial support of the NRA, many code authorities moved to standardize products, shutting off product differentiation as an arena of potential rivalry, in spite of its role as one of the strongest mechanisms that might soften price competition.15 Of course if one is looking to run a naked price-fixing scheme, it is helpful to eliminate product differentiation as an avenue for cost-raising, profit-eroding rivalry. An industry push for standardization can thus be seen as a way of supporting hard-core cartelization, while less enthusiasm on the part of some administration officials may have reflected an understanding, however intuitive, that socially more desirable soft competition required that avenues for product differentiation be left open.

National Recovery Review Board

According to some critical observers then and later, the codes did lead to an unsurprising sort of “golden age” of cartelization. The National Recovery Review Board, led by an outraged Clarence Darrow (of Scopes “monkey trial” fame), concluded in May of 1934 that “in certain industries monopolistic practices existed.”16 While there are legitimate examples of every variety of cartelization occurring under the NRA, many contemporaneous and subsequent assessments of Darrow’s work dismiss the Board’s “analysis” as hopelessly biased. Thus although its conclusions are interesting as a matter of political economy, it is far from clear that the Board carried out any dispassionate inventory of conditions across industries, much less a real weighing of evidence.17

Compliance Crisis

In contrast to Darrow’s perspective, other commentators focus on the “compliance crisis” that erupted within a few months of passage of the NIRA.18 Many industries were faced with “chiselers” who refused to respect code pricing rules. Firms that attempted to uphold code prices in the face of defection lost both market share and respect for the NRA.

NRA state compliance offices had recorded over 30,000 “trade practice” complaints by early 1935.19 However, the compliance program was characterized by “a marked timidity on the part of NRA enforcement officials.”20 This timidity was fatal to the program, since attempting monopoly pricing without parallel action from competitors can easily be more damaging to a firm than the most bare-knuckled competition. NRA hesitancy came about as a result of doubts about whether a vigorous enforcement effort would withstand constitutional challenge, a not-unrelated lack of support from the Department of Justice, public antipathy for enforcement actions aimed at forcing sellers to charge higher prices, and unabating internal NRA disputes about the advisability of the price-fixing core of the trade practice program.21 Consequently, by mid-1934, firms disinclined to respect code pricing rules were ignoring them. By that point, then, contrary to the initial expectations of many code signatories, the new antitrust regime represented only permission to form voluntary cartelization agreements, not the advent of government-enforced cartels. Even there, participants had to be discreet, so as not to run afoul of the antimonopoly language of the Act.

It is still far from clear how much market power was conferred by the NRA’s loosening of antitrust constraints. Of course, modern observers of the alternating successes and failures of cartels such as OPEC will not be surprised that the NRA program led to mixed results. In the absence of government enforcement, the program simply amounted to de facto legalization of self-enforcing cartels. With respect to the ease of collusion, economic theory is clear only on the point that self-enforceability is an open question; self-interest may lead to either breakdown of agreements or success at sustaining them.
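Whether a voluntary agreement of this kind holds together is the standard question of repeated-game analysis rather than anything documented in the NRA record; the sketch below simply restates the textbook condition with hypothetical per-period profits. A firm honors the agreed price only if it weights future collusive profits heavily enough to forgo a one-period gain from undercutting, after which the agreement is assumed to collapse into competition.

# Textbook repeated-game condition for a self-enforcing cartel; profits are hypothetical.
pi_collude = 10.0   # per-period profit while the agreement holds
pi_deviate = 18.0   # one-period profit from undercutting the agreed price
pi_compete = 2.0    # per-period profit after the agreement breaks down

# Colluding forever beats a one-shot deviation followed by competition iff
#   pi_collude / (1 - delta) >= pi_deviate + delta * pi_compete / (1 - delta),
# which rearranges to delta >= (pi_deviate - pi_collude) / (pi_deviate - pi_compete).
critical_delta = (pi_deviate - pi_collude) / (pi_deviate - pi_compete)
print(f"Self-enforcing only if firms weight the future at delta >= {critical_delta:.2f}")

With the numbers assumed here the threshold discount factor is 0.5; industries whose members discounted the future more heavily, or where the gain from undercutting was larger, would have seen their codes unravel, which is consistent with the mixed record described above.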

Conflicts between Large and Small Firms

Some part of the difficulties encountered by NRA cartels may have had roots in a progressive mandate to offer special protection to the “little guy.” The NIRA had specified that acceptable codes of fair competition must not “eliminate or oppress small enterprises,”22 and that “any organization availing itself of the benefits of this title shall be truly representative of the trade or industry … Any organization violating … shall cease to be entitled to the benefits of this title.”23 Majority rule provisions were exceedingly common in codes, and were most likely a reflection of this statutory mandate. The concern for small enterprise had strong progressive roots.24 Justice Brandeis’s well-known antipathy for large-scale enterprise and concentration of economic power reflected a widespread and long-standing debate about the legitimate goals of the American experiment.

In addition to evaluating monopolization under the codes, the Darrow board had been charged with assessing the impact of the NRA on small business. Its conclusion was that “in certain industries small enterprises were oppressed.” Again, however, as with his review of monopolization, Darrow may have seen only what he was predisposed to see. A number of NRA “code histories” detail conflicts within industries in which small, higher-cost producers sought to use majority rule provisions to support pricing at levels above those desired by larger, lower-cost producers. In the absence of effective enforcement from the government, such prices were doomed to break down, triggering repeated price wars in some industries.25

By 1935, there was understandable bitterness about what many businesses viewed as the lost promise of the NRA. Undoubtedly, the bitterness was exacerbated by the fact that the NRA wanted higher wages while failing to deliver the tools needed for effective cartelization. However, it is not entirely clear that everyone in the business community felt that the labor provisions of the Act were undesirable.26

Labor and Employment Issues

By their nature, market economies give rise to surplus-eroding rivalry among those who would be better off collectively if they could only act in concert. NRA codes of fair competition, specifying agreements on pricing and terms of employment, arose from a perceived confluence of interests among representatives of “business,” “labor,” and “the public” in muting that rivalry. Many proponents of the NIRA held that competitive pressures on business had led to downward pressure on wages, which in turn caused low consumption, leading to greater pressure on business, and so on. Allowing workers to organize and bargain collectively, while their employers pledged to one another not to sell below cost, was identified as a way to arrest harmful deflationary forces. Knowledge that one’s rivals would also be forced to pay “code wages” had some potential for aiding cartel survival. Thus the rationale for NRA wage supports at the microeconomic level potentially dovetailed with the macroeconomic theory by which higher wages were held to support higher consumption and, in turn, higher prices.

Labor provisions of the NIRA appeared in Section 7: “… employees shall have the right to organize and bargain collectively through representatives of their own choosing … employers shall comply with the maximum hours of labor, minimum rates of pay, and other conditions of employment…”27 Each “code of fair competition” had to include labor provisions acceptable to the National Recovery Administration, developed during a process of negotiations, hearings, and review. Thus in order to obtain the shield against antitrust prosecution for their “trade practices” offered by an approved code, significant concessions to workers had to be made.

The NRA is generally judged to have been a success for labor and a miserable failure for business. However, evaluation is complicated to the extent that labor could not have achieved gains with respect to collective bargaining rights over wages and working conditions, had those rights not been more or less willingly granted by employers operating under the belief that stabilization of labor costs would facilitate cartelization. The labor provisions may have indeed helped some industries as well as helping workers, and for firms in such industries, the NRA cannot have been judged a failure. Moreover, while some businesses may have found the Act beneficial, because labor cost stability or freedom to negotiate with rivals enhanced their ability to cooperate on price, it is not entirely obvious that workers as a class gained as much as is sometimes contended.

The NRA did help solidify new and important norms regarding child labor, maximum hours, and other conditions of employment; it will never be known if the same progress could have been made had not industry been more or less hornswoggled into giving ground, using the antitrust laws as bait. Whatever the long-term effects of the NRA on worker welfare, the short-term gains for labor associated with higher wages were questionable. While those workers who managed to stay employed throughout the nineteen thirties benefited from higher wages, to the extent that workers were also consumers, and often unemployed consumers at that, or even potential entrepreneurs, they may have been better off without the NRA.

The issue is far from settled. Ben Bernanke and Martin Parkinson examine the economic growth that occurred during the New Deal in spite of higher wages and suggest “part of the answer may be that the higher wages ‘paid for themselves’ through increased productivity of labor. Probably more important, though, is the observation that with imperfectly competitive product markets, output depends on aggregate demand as well as the real wage. Maybe Herbert Hoover and Henry Ford were right: Higher real wages may have paid for themselves in the broader sense that their positive effect on aggregate demand compensated for their tendency to raise cost.”28 However, Christina Romer establishes a close connection between NRA programs and the failure of wages and prices to adjust to high unemployment levels. In her view, “By preventing the large negative deviations of output from trend in the mid-1930s from exerting deflationary pressure, [the NRA] prevented the economy’s self-correction mechanism from working.”29

Aftermath of the Supreme Court’s Ruling in the Schechter Case

The Supreme Court struck down the NRA on May 27, 1935; the case was a dispute over violations of labor provisions of the “Live Poultry Code” allegedly perpetrated by the Schechter Poultry Corporation. The Court held the code to be invalid on grounds of “attempted delegation of legislative power and the attempted regulation of intrastate transactions which affect interstate commerce only indirectly.”30 There were to be no more grand bargains between business and labor under the New Deal.

Riven by divergent agendas rooted in industry- and firm-specific technology and demand, “business” was never able to speak with even the tenuous degree of unity achieved by workers. Following the abortive attempt to get the government to enforce cartels, firms and industries went their own ways, using a variety of strategies to enhance their situations. A number of sectors did succeed in getting passage of “little NRAs” with mechanisms tailored to mute competition in their particular circumstances. These mechanisms included the Robinson-Patman Act, aimed at strengthening traditional retailers against the ability of chain stores to buy at lower prices, the Guffey Acts, in which high cost bituminous coal operators and coal miners sought protection from the competition of lower cost operators, and the Motor Carrier Act in which high cost incumbent truckers obtained protection against new entrants.31

Ongoing macroeconomic analysis suggests that the general public interest may have been poorly served by the experiment of the NRA. As with many macroeconomic theories, the validity of the underconsumption scenario put forth in support of the program depended on the strength and timing of its various mechanisms. Increasingly it appears that the NRA set off inflationary forces thought by some to be desirable at the time, but that in fact had depressing effects on demand for labor and on output. Pure monopolistic deadweight losses probably were less important than higher wage costs (although there has not been any close examination of inefficiencies that may have resulted from the NRA’s attempt to protect small, higher-cost producers). The strength of any mitigating effects on aggregate demand remains to be established.

1 Leverett Lyon, P. Homan, L. Lorwin, G. Terborgh, C. Dearing, L. Marshall, The National Recovery Administration: An Analysis and Appraisal, Washington: Brookings Institution, 1935, p. 313, footnote 9.

2 See, for example, Charles Frederick Roos, NRA Economic Planning, Colorado Springs: Cowles Commission, 1935, p. 343.

3See, for example, Colin Gordon, New Deals: Business, Labor, and Politics in America, 1920-1935, New York: Cambridge University Press, 1993, especially chapter 5.

4Christina D. Romer, “Why Did Prices Rise in the 1930s?” Journal of Economic History 59, no. 1 (1999): 167-199; Michael Weinstein, Recovery and Redistribution under the NIRA, Amsterdam: North Holland, 1980, and Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression,” Working Paper 597, Federal Reserve Bank of Minneapolis, February 2001. But also see “Unemployment, Inflation and Wages in the American Depression: Are There Lessons for Europe?” Ben Bernanke and Martin Parkinson, American Economic Review: Papers and Proceedings 79, no. 2 (1989): 210-214.

5 See, for example, Donald Brand, Corporatism and the Rule of Law: A Study of the National Recovery Administration, Ithaca: Cornell University Press, 1988, p. 94.

6 See, for example, Roos, op. cit., pp. 77, 92.

7 Section 3(a) of The National Industrial Recovery Act, reprinted at p. 478 of Roos, op. cit.

8 Section 5 of The National Industrial Recovery Act, reprinted at p. 483 of Roos, op. cit. Note though, that the legal status of actions taken during the NRA era was never clear; Roos points out that “…President Roosevelt signed an executive order on January 20, 1934, providing that any complainant of monopolistic practices … could press it before the Federal Trade Commission or request the assistance of the Department of Justice. And, on the same date, Donald Richberg issued a supplementary statement which said that the provisions of the anti-trust laws were still in effect and that the NRA would not tolerate monopolistic practices.” (Roos, op. cit. p. 376.)

9 Lyon, op. cit., p. 307, cited at p. 52 in Cole and Ohanian, op. cit.

10 Roos, op. cit., p. 75; and Blackwell Smith, My Imprint on the Sands of Time: The Life of a New Dealer, Vantage Press, New York, p. 109.

11 Lyon, op. cit., p. 570.

12 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

13 Roos, op. cit., at pp. 254-259. Charles Roos comments that “Leon Henderson and Blackwell Smith, in particular, became intrigued with a notion that competition could be set up within limits and that in this way wide price variations tending to demoralize an industry could be prevented.”

14 Lyon, et al., op. cit., p. 605.

15 Smith, Assistant Counsel of the NRA (per Roos, op. cit., p. 254), has the following to say about standardization: “One of the more controversial subjects, which we didn’t get into too deeply, except to draw guidelines, was standardization.” Smith goes on to discuss the obvious need to standardize rail track gauges, plumbing fittings, and the like, but concludes, “Industry on the whole wanted more standardization than we could go with.” (Blackwell Smith, op. cit., pp. 106-7.) One must not go overboard looking for coherence among the various positions espoused by NRA administrators; along these lines it is worth remembering Smith’s statement some 60 years later: “Business’s reaction to my policy [Smith was speaking generally here of his collective proposals] to some extent was hostile. They wished that the codes were not as strict as I wanted them to be. Also, there was criticism from the liberal/labor side to the effect that the codes were more in favor of business than they should have been. I said, ‘We are guided by a squealometer. We tune policy until the squeals are the same pitch from both sides.’” (Smith, op. cit., p. 108.)

16 Quoted at p. 378 of Roos, op. cit.

17 Brand, op. cit. at pp. 159-60 cites in agreement extremely critical conclusions by Roos (op. cit. at p. 409) and Arthur Schlesinger, The Age of Roosevelt: The Coming of the New Deal, Boston: Houghton Mifflin, 1959, p. 133.

18 Roos acknowledges a breakdown by spring of 1934: “By March, 1934 something was urgently needed to encourage industry to observe code provisions; business support for the NRA had decreased materially and serious compliance difficulties had arisen.” (Roos, op. cit., at p. 318.) Brand dates the start of the compliance crisis much earlier, in the fall of 1933. (Brand, op. cit., p. 103.)

19 Lyon, op. cit., p. 264.

20 Lyon, op. cit., p. 268.

21 Lyon, op. cit., pp. 268-272. See also Peter H. Irons, The New Deal Lawyers, Princeton: Princeton University Press, 1982.

22 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

23 Section 6(b) of The National Industrial Recovery Act, op. cit.

24 Brand, op. cit.

25 Barbara Alexander and Gary D. Libecap, “The Effect of Cost Heterogeneity in the Success and Failure of the New Deal’s Agricultural and Industrial Programs,” Explorations in Economic History, 37 (2000), pp. 370-400.

26 Gordon, op. cit.

27 Section 7 of the National Industrial Recovery Act, reprinted at pp. 484-5 of Roos, op. cit.

28 Bernanke and Parkinson, op. cit., p. 214.

29 Romer, op. cit., p. 197.

30 Supreme Court of the United States, Nos. 854 and 864, October term, 1934 (decision issued May 27, 1935). Reprinted in Roos, op. cit., p. 580.

31 Ellis W. Hawley, The New Deal and the Problem of Monopoly: A Study in Economic Ambivalence, 1966, Princeton: Princeton University Press, p. 249; Irons, op. cit., pp. 105-106, 248.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal fuel, there was little need to use mineral fuel in seventeenth and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade and the James River and Kanawha Canal failed to make necessary improvements in order to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields and urban markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources: Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548; Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous as it would become in the era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power — even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in a union, which struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio followed the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, so anthracite appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.
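The round figures quoted in this paragraph imply steady rates of change that are worth making explicit. The short calculation below converts the quoted endpoints into compound annual rates; because the inputs are the paragraph’s approximate numbers, the results are only rough orders of magnitude.

# Compound annual rates implied by the round figures quoted above:
# anthracite at roughly $11 per ton in 1830 and $5.50 in 1860, and national
# output of about 2.5 million tons in 1840 and just over 20 million in 1860.

def annual_rate(start, end, years):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

price_change  = annual_rate(11.0, 5.50, 1860 - 1830)
output_growth = annual_rate(2.5, 20.0, 1860 - 1840)

print(f"Anthracite price, 1830-1860: {price_change:+.1%} per year")   # about -2.3%
print(f"National output, 1840-1860:  {output_growth:+.1%} per year")  # about +11%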

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and in 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
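For readers unfamiliar with the adjustment, the sketch below shows the mechanics of deflating a nominal price into a real, inflation-adjusted one. The nominal prices and the general price index used here are hypothetical placeholders chosen only so that the output roughly echoes the percentages in the paragraph; they are not the underlying series.

# Mechanics of a real-price adjustment. The nominal prices and the general
# price index below are hypothetical placeholders, not the actual series.

nominal_price = {1860: 5.50, 1863: 11.00, 1864: 14.50}   # dollars per ton (assumed)
price_index   = {1860: 100,  1863: 150,   1864: 180}     # general prices, 1860 = 100 (assumed)

def real_price(year, base=1860):
    """Deflate a nominal price into base-year dollars."""
    return nominal_price[year] * price_index[base] / price_index[year]

for year in (1863, 1864):
    change = real_price(year) / real_price(1860) - 1
    print(f"{year}: real price ${real_price(year):.2f} per ton ({change:+.0%} vs. 1860)")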

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some miners used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water table. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the implementation of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA listed about a quarter of a million members, raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coal fields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company, symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

 

Table 1: Coal Production in the United States, 1829-1899

Year      Anthracite    Bituminous    Percent Increase    Tons per
          (000 tons)    (000 tons)    over Decade         capita
1829             138           102                --        0.02
1839           1,008           552               550        0.09
1849           3,995         2,453               313        0.28
1859           9,620         6,013               142        0.50
1869          17,083        15,821               110        0.85
1879          30,208        37,898               107        1.36
1889          45,547        95,683               107        2.24
1899          60,418       193,323                80        3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
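The two derived columns in Table 1 can be reproduced from its production figures. In the sketch below, the decade increase is computed over total (anthracite plus bituminous) output, and tons per capita divides that total by the census population of the following year; the population figures, given in millions, are approximate census totals supplied here as assumptions because the table itself does not report them.

# Reproducing Table 1's derived columns from its production figures.
production = {  # thousands of tons, from Table 1 (anthracite, bituminous)
    1829: (138, 102),      1839: (1008, 552),
    1849: (3995, 2453),    1859: (9620, 6013),
    1869: (17083, 15821),  1879: (30208, 37898),
    1889: (45547, 95683),  1899: (60418, 193323),
}
# Approximate census populations in millions (assumed; not given in the table).
population = {1830: 12.9, 1840: 17.1, 1850: 23.2, 1860: 31.4,
              1870: 38.6, 1880: 50.2, 1890: 62.9, 1900: 76.2}

prev_total = None
for year in sorted(production):
    anthracite, bituminous = production[year]
    total = anthracite + bituminous
    increase = f"{total / prev_total - 1:.0%}" if prev_total else "--"
    per_capita = (total / 1000) / population[year + 1]   # million tons / million people
    print(f"{year}: {total:>7} thousand tons, decade increase {increase:>5}, {per_capita:.2f} tons per capita")
    prev_total = total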

Table 2: Leading Coal Producing States, 1889

State             Coal Production (thousands of tons)
Pennsylvania      81,719
Illinois          12,104
Ohio               9,977
West Virginia      6,232
Iowa               4,095
Alabama            3,573
Indiana            2,845
Colorado           2,544
Kentucky           2,400
Kansas             2,221
Tennessee          1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187
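Read against the national totals in Table 1, Table 2 also implies how dominant Pennsylvania remained; the short calculation below combines the 1889 figures from the two tables.

# Pennsylvania's rough share of national output in 1889, combining Tables 1 and 2.
pa_1889 = 81_719                    # thousands of tons (Table 2)
national_1889 = 45_547 + 95_683     # anthracite + bituminous (Table 1)
print(f"Pennsylvania share of U.S. coal output, 1889: {pa_1889 / national_1889:.0%}")  # about 58%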

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M. editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.

The American Economy during World War II

Christopher J. Tassava

For the United States, World War II and the Great Depression constituted the most important economic event of the twentieth century. The war’s effects were varied and far-reaching. The war decisively ended the depression itself. The federal government emerged from the war as a potent economic actor, able to regulate economic activity and to partially control the economy through spending and consumption. American industry was revitalized by the war, and many sectors were by 1945 either sharply oriented to defense production (for example, aerospace and electronics) or completely dependent on it (atomic energy). The organized labor movement, strengthened by the war beyond even its depression-era height, became a major counterbalance to both the government and private industry. The war’s rapid scientific and technological changes continued and intensified trends begun during the Great Depression and created a permanent expectation of continued innovation on the part of many scientists, engineers, government officials and citizens. Similarly, the substantial increases in personal income and frequently, if not always, in quality of life during the war led many Americans to foresee permanent improvements to their material circumstances, even as others feared a postwar return of the depression. Finally, the war’s global scale severely damaged every major economy in the world except for the United States, which thus enjoyed unprecedented economic and political power after 1945.

The Great Depression

The global conflict which was labeled World War II emerged from the Great Depression, an upheaval which destabilized governments, economies, and entire nations around the world. In Germany, for instance, the rise of Adolf Hitler and the Nazi party occurred at least partly because Hitler claimed to be able to transform a weakened Germany into a self-sufficient military and economic power which could control its own destiny in European and world affairs, even as liberal powers like the United States and Great Britain were buffeted by the depression.

In the United States, President Franklin Roosevelt promised, less dramatically, to enact a “New Deal” which would essentially reconstruct American capitalism and governance on a new basis. As it waxed and waned between 1933 and 1940, Roosevelt’s New Deal mitigated some effects of the Great Depression, but did not end the economic crisis. In 1939, when World War II erupted in Europe with Germany’s invasion of Poland, numerous economic indicators suggested that the United States was still deeply mired in the depression. For instance, after 1929 the American gross domestic product declined for four straight years, then slowly and haltingly climbed back to its 1929 level, which was finally exceeded again in 1936. (Watkins, 2002; Johnston and Williamson, 2004)

Unemployment was another measure of the depression’s impact. Between 1929 and 1939, the American unemployment rate averaged 13.3 percent (calculated from “Corrected BLS” figures in Darby, 1976, 8). In the summer of 1940, about 5.3 million Americans were still unemployed — far fewer than the 11.5 million who had been unemployed in 1932 (about thirty percent of the American workforce) but still a significant pool of unused labor and, often, suffering citizens. (Darby, 1976, 7. For somewhat different figures, see Table 3 below.)

In spite of these dismal statistics, the United States was, in other ways, reasonably well prepared for war. The wide array of New Deal programs and agencies which existed in 1939 meant that the federal government was markedly larger and more actively engaged in social and economic activities than it had been in 1929. Moreover, the New Deal had accustomed Americans to a national government which played a prominent role in national affairs and which, at least under Roosevelt’s leadership, often chose to lead, not follow, private enterprise and to use new capacities to plan and administer large-scale endeavors.

Preparedness and Conversion

As war spread throughout Europe and Asia between 1939 and 1941, nowhere was the federal government’s leadership more important than in the realm of “preparedness” — the national project to ready for war by enlarging the military, strengthening certain allies such as Great Britain, and above all converting America’s industrial base to produce armaments and other war materiel rather than civilian goods. “Conversion” was the key issue in American economic life in 1940-1942. In many industries, company executives resisted converting to military production because they did not want to lose consumer market share to competitors who did not convert. Conversion thus became a goal pursued by public officials and labor leaders. In 1940, Walter Reuther, a high-ranking officer in the United Auto Workers labor union, provided impetus for conversion by advocating that the major automakers convert to aircraft production. Though initially rejected by car-company executives and many federal officials, the Reuther Plan effectively called the public’s attention to America’s lagging preparedness for war. Still, the auto companies only fully converted to war production in 1942 and only began substantially contributing to aircraft production in 1943.

Even to contemporary observers, though, not all industries seemed to be lagging as badly as autos. Merchant shipbuilding mobilized early and effectively. The industry was overseen by the U.S. Maritime Commission (USMC), a New Deal agency established in 1936 to revive the moribund shipbuilding industry, which had been in a depression since 1921, and to ensure that American shipyards would be capable of meeting wartime demands. With the USMC supporting and funding the establishment and expansion of shipyards around the country, especially on the Gulf and Pacific coasts, merchant shipbuilding took off. The entire industry had produced only 71 ships between 1930 and 1936, but from 1938 to 1940, commission-sponsored shipyards turned out 106 ships, and then almost that many in 1941 alone (Fischer, 41). The industry's position in the vanguard of American preparedness grew from its strategic import — ever more ships were needed to transport American goods to Great Britain and France, among other American allies — and from the Maritime Commission's ability to administer the industry through means as varied as construction contracts, shipyard inspectors, and raw goading of contractors by commission officials.

Many of the ships built in Maritime Commission shipyards carried American goods to the European allies as part of the “Lend-Lease” program, which was instituted in 1941 and provided another early indication that the United States could and would shoulder a heavy economic burden. By all accounts, Lend-Lease was crucial to enabling Great Britain and the Soviet Union to fight the Axis, not least before the United States formally entered the war in December 1941. (Though scholars are still assessing the impact of Lend-Lease on these two major allies, it is likely that both countries could have continued to wage war against Germany without American aid, which seems to have served largely to augment the British and Soviet armed forces and to have shortened the time necessary to retake the military offensive against Germany.) Between 1941 and 1945, the U.S. exported about $32.5 billion worth of goods through Lend-Lease, of which $13.8 billion went to Great Britain and $9.5 billion went to the Soviet Union (Milward, 71). The war dictated that aircraft, ships (and ship-repair services), military vehicles, and munitions would always rank among the quantitatively most important Lend-Lease goods, but food was also a major export to Britain (Milward, 72).

Pearl Harbor was an enormous spur to conversion. The formal declarations of war by the United States on Japan and Germany made plain, once and for all, that the American economy would now need to be transformed into what President Roosevelt had called “the Arsenal of Democracy” a full year before, in December 1940. From the perspective of federal officials in Washington, the first step toward wartime mobilization was the establishment of an effective administrative bureaucracy.

War Administration

From the beginning of preparedness in 1939 through the peak of war production in 1944, American leaders recognized that the stakes were too high to permit the war economy to grow in an unfettered, laissez-faire manner. American manufacturers, for instance, could not be trusted to stop producing consumer goods and to start producing materiel for the war effort. To organize the growing economy and to ensure that it produced the goods needed for war, the federal government spawned an array of mobilization agencies which not only often purchased goods (or arranged their purchase by the Army and Navy), but which in practice closely directed those goods’ manufacture and heavily influenced the operation of private companies and whole industries.

Though both the New Deal and mobilization for World War I served as models, the World War II mobilization bureaucracy assumed its own distinctive shape as the war economy expanded. Most importantly, American mobilization was markedly less centralized than mobilization in other belligerent nations. The war economies of Britain and Germany, for instance, were overseen by war councils which comprised military and civilian officials. In the United States, the Army and Navy were not incorporated into the civilian administrative apparatus, nor was a supreme body created to subsume military and civilian organizations and to direct the vast war economy.

Instead, the military services enjoyed almost-unchecked control over their enormous appetites for equipment and personnel. With respect to the economy, the services were largely able to curtail production destined for civilians (e.g., automobiles or many non-essential foods) and even for war-related but non-military purposes (e.g., textiles and clothing). In parallel to but never commensurate with the Army and Navy, a succession of top-level civilian mobilization agencies sought to influence Army and Navy procurement of manufactured goods like tanks, planes, and ships, raw materials like steel and aluminum, and even personnel. One way of gauging the scale of the increase in federal spending and the concomitant increase in military spending is through comparison with GDP, which itself rose sharply during the war. Table 1 shows the dramatic increases in GDP, federal spending, and military spending.

Table 1: Federal Spending and Military Spending during World War II

(dollar values in billions of constant 1940 dollars)

GDP Federal Spending Defense Spending
Year total $ % increase total $ % increase % of GDP total $ % increase % of GDP % of federal spending
1940 101.4 9.47 9.34% 1.66 1.64% 17.53%
1941 120.67 19.00% 13.00 37.28% 10.77% 6.13 269.28% 5.08% 47.15%
1942 139.06 15.24% 30.18 132.15% 21.70% 22.05 259.71% 15.86% 73.06%
1943 136.44 -1.88% 63.57 110.64% 46.59% 43.98 99.46% 32.23% 69.18%
1944 174.84 28.14% 72.62 14.24% 41.54% 62.95 43.13% 36.00% 86.68%
1945 173.52 -0.75% 72.11 -0.70% 41.56% 64.53 2.51% 37.19% 89.49%

Sources: 1940 GDP figure from Louis Johnston and Samuel H. Williamson, “The Annual Real and Nominal GDP for the United States, 1789 — Present,” Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1941-1945 GDP figures calculated using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, “Budget of the United States Government: Historical Tables Fiscal Year 2005,” Table 6.1—Composition of Outlays: 1940—2009 and Table 3.1—Outlays by Superfunction and Function: 1940—2009.
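The percentage columns in Table 1 follow directly from the dollar totals. A minimal Python sketch, using only the figures printed in the table, reproduces the growth rates and shares (the dictionaries below simply restate the table's totals):

```python
# Recompute the percentage columns of Table 1 from its dollar totals
# (values in billions, as printed in the table).
gdp     = {1940: 101.40, 1941: 120.67, 1942: 139.06, 1943: 136.44, 1944: 174.84, 1945: 173.52}
federal = {1940: 9.47, 1941: 13.00, 1942: 30.18, 1943: 63.57, 1944: 72.62, 1945: 72.11}
defense = {1940: 1.66, 1941: 6.13, 1942: 22.05, 1943: 43.98, 1944: 62.95, 1945: 64.53}

for year in sorted(gdp):
    growth = (federal[year] / federal[year - 1] - 1) * 100 if (year - 1) in federal else None
    growth_txt = f"{growth:6.1f}%" if growth is not None else "   n/a"
    print(f"{year}: federal spending {growth_txt} vs. prior year, "
          f"{federal[year] / gdp[year]:.1%} of GDP; "
          f"defense = {defense[year] / federal[year]:.1%} of federal spending")
```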

Preparedness Agencies

To oversee this growth, President Roosevelt created a number of preparedness agencies beginning in 1939, including the Office for Emergency Management and its key sub-organization, the National Defense Advisory Commission; the Office of Production Management; and the Supply Priorities and Allocation Board. None of these organizations was particularly successful at generating or controlling mobilization because all included two competing parties. On one hand, private-sector executives and managers had joined the federal mobilization bureaucracy but continued to emphasize corporate priorities such as profits and positioning in the marketplace. On the other hand, reform-minded civil servants, who were often holdovers from the New Deal, emphasized the state's prerogatives with respect to mobilization and war making. As a result of this basic division in the mobilization bureaucracy, "the military largely remained free of mobilization agency control" (Koistinen, 502).

War Production Board

In January 1942, as part of another effort to mesh civilian and military needs, President Roosevelt established a new mobilization agency, the War Production Board, and placed it under the direction of Donald Nelson, a former Sears Roebuck executive. Nelson understood immediately that the staggeringly complex problem of administering the war economy could be reduced to one key issue: balancing the needs of civilians — especially the workers whose efforts sustained the economy — against the needs of the military — especially those of servicemen and women but also their military and civilian leaders.

Though neither Nelson nor other high-ranking civilians ever fully resolved this issue, Nelson did realize several key economic goals. First, in late 1942, Nelson successfully resolved the so-called "feasibility dispute," a conflict between civilian administrators and their military counterparts over the extent to which the American economy should be devoted to military needs during 1943 (and, by implication, in subsequent war years). Arguing that "all-out" production for war would harm America's long-term ability to continue to produce for war after 1943, Nelson convinced the military to scale back its Olympian demands. He thereby also established a precedent for planning war production so as to meet most military and some civilian needs. Second (and partially as a result of the feasibility dispute), the WPB in late 1942 created the "Controlled Materials Plan," which effectively allocated steel, aluminum, and copper to industrial users. The CMP remained in effect throughout the war, and helped curtail conflict among the military services and between them and civilian agencies over the growing but still scarce supplies of those three key metals.

Office of War Mobilization

By late 1942 it was clear that Nelson and the WPB were unable to fully control the growing war economy and especially to wrangle with the Army and Navy over the necessity of continued civilian production. Accordingly, in May 1943 President Roosevelt created the Office of War Mobilization and in July put James Byrnes — a trusted advisor, a former U.S. Supreme Court justice, and the so-called "assistant president" — in charge. Though the WPB was not abolished, the OWM soon became the dominant mobilization body in Washington. Unlike Nelson, Byrnes was able to establish an accommodation with the military services over war production by "acting as an arbiter among contending forces in the WPB, settling disputes between the board and the armed services, and dealing with the multiple problems" of the War Manpower Commission, the agency charged with controlling civilian labor markets and with assuring a continuous supply of draftees to the military (Koistinen, 510).

Beneath the highest-level agencies like the WPB and the OWM, a vast array of other federal organizations administered everything from labor (the War Manpower Commission) to merchant shipbuilding (the Maritime Commission) and from prices (the Office of Price Administration) to food (the War Food Administration). Given the scale and scope of these agencies’ efforts, they did sometimes fail, and especially so when they carried with them the baggage of the New Deal. By the midpoint of America’s involvement in the war, for example, the Civilian Conservation Corps, the Works Progress Administration, and the Rural Electrification Administration — all prominent New Deal organizations which tried and failed to find a purpose in the mobilization bureaucracy — had been actually or virtually abolished.

Taxation

However, these agencies were often quite successful in achieving their respective, narrower aims. The Department of the Treasury, for instance, was remarkably successful at generating the money to pay for the war, drawing on both the first general income tax in American history and the famous "war bonds" sold to the public. Beginning in 1940, the government extended the income tax to virtually all Americans and, from 1943, began collecting the tax via the now-familiar method of continuous withholdings from paychecks (rather than lump-sum payments after the fact). The number of Americans required to pay federal taxes rose from 4 million in 1939 to 43 million in 1945. With such a large pool of taxpayers, the American government took in $45 billion in 1945, an enormous increase over the $8.7 billion collected in 1941 but still far short of the $83 billion spent on the war in 1945. Over that same period, federal tax revenue grew from about 8 percent of GDP to more than 20 percent. Americans who earned as little as $500 per year paid income tax at a 23 percent rate, while those who earned more than $1 million per year paid a 94 percent rate. The average income tax rate peaked in 1944 at 20.9 percent ("Fact Sheet: Taxes").

War Bonds

All told, taxes provided about $136.8 billion of the war's total cost of $304 billion (Kennedy, 625). To cover the other $167.2 billion, the Treasury Department also expanded its bond program, creating the famous "war bonds" hawked by celebrities and purchased in vast numbers and enormous values by Americans. The first war bond was purchased by President Roosevelt on May 1, 1941 ("Introduction to Savings Bonds"). Though the bonds returned only 2.9 percent annual interest after a 10-year maturity, they nonetheless served as a valuable source of revenue for the federal government and an extremely important investment for many Americans. Bonds served as a way for citizens to make an economic contribution to the war effort, but because interest on them accumulated more slowly than consumer prices rose, they could not fully preserve the value of income that could not readily be spent during the war. By the time war-bond sales ended in 1946, 85 million Americans had purchased more than $185 billion worth of the securities, often through automatic deductions from their paychecks ("Brief History of World War Two Advertising Campaigns: War Loans and Bonds"). Commercial institutions like banks also bought billions of dollars of bonds and other treasury paper, holding more than $24 billion at the war's end (Kennedy, 626).

Price Controls and the Standard of Living

Fiscal and financial matters were also addressed by other federal agencies. For instance, the Office of Price Administration used its "General Maximum Price Regulation" (also known as "General Max") to attempt to curtail inflation by maintaining prices at their March 1942 levels. In July, the National War Labor Board (NWLB; a successor to a New Deal-era body) limited wartime wage increases to about 15 percent, the amount by which the cost of living had risen from January 1941 to May 1942. Neither "General Max" nor the wage-increase limit was entirely successful, though federal efforts did curtail inflation. Between April 1942 and June 1946, the period of the most stringent federal controls on inflation, the annual rate of inflation was just 3.5 percent; the annual rate had been 10.3 percent in the six months before April 1942 and it soared to 28.0 percent in the six months after June 1946 (Rockoff, "Price and Wage Controls in Four Wartime Periods," 382). With wages rising about 65 percent over the course of the war, this limited success in cutting the rate of inflation meant that many American civilians enjoyed a stable or even improving quality of life during the war (Kennedy, 641). Improvement in the standard of living was not ubiquitous, however. In some regions, such as rural areas in the Deep South, living standards stagnated or even declined, and according to some economists, the national living standard barely stayed level or even declined (Higgs, 1992).
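The inflation rates quoted above are annualized rates computed over periods of different lengths. A short sketch of that conversion, using placeholder CPI index values rather than the historical series:

```python
def annualized_inflation(cpi_start, cpi_end, months):
    """Convert a price-level change over `months` months into an annualized rate."""
    return (cpi_end / cpi_start) ** (12 / months) - 1

# Placeholder index values, chosen only to illustrate the arithmetic.
print(f"{annualized_inflation(100.0, 105.0, 6):.1%} annualized (a 5% rise in six months)")
print(f"{annualized_inflation(100.0, 114.8, 50):.1%} annualized (a 14.8% rise over 50 months)")
```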

Labor Unions

Labor unions and their members benefited especially. The NWLB’s “maintenance-of-membership” rule allowed unions to count all new employees as union members and to draw union dues from those new employees’ paychecks, so long as the unions themselves had already been recognized by the employer. Given that most new employment occurred in unionized workplaces, including plants funded by the federal government through defense spending, “the maintenance-of-membership ruling was a fabulous boon for organized labor,” for it required employers to accept unions and allowed unions to grow dramatically: organized labor expanded from 10.5 million members in 1941 to 14.75 million in 1945 (Blum, 140). By 1945, approximately 35.5 percent of the non-agricultural workforce was unionized, a record high.

The War Economy at High Water

Despite the almost-continual crises of the civilian war agencies, the American economy expanded at an unprecedented (and unduplicated) rate between 1941 and 1945. The gross national product of the U.S., as measured in constant dollars, grew from $88.6 billion in 1939 — while the country was still suffering from the depression — to $135 billion in 1944. War-related production skyrocketed from just two percent of GNP to 40 percent in 1943 (Milward, 63).

As Table 2 shows, output in many American manufacturing sectors increased spectacularly from 1939 to 1944, the height of war production in many industries.

Table 2: Indices of American Manufacturing Output (1939 = 100)

1940 1941 1942 1943 1944
Aircraft 245 630 1706 2842 2805
Munitions 140 423 2167 3803 2033
Shipbuilding 159 375 1091 1815 1710
Aluminum 126 189 318 561 474
Rubber 109 144 152 202 206
Steel 131 171 190 202 197

Source: Milward, 69.

Expansion of Employment

The wartime economic boom spurred and benefited from several important social trends. Foremost among these trends was the expansion of employment, which paralleled the expansion of industrial production. In 1944, unemployment dipped to 1.2 percent of the civilian labor force, a record low in American economic history and as near to “full employment” as is likely possible (Samuelson). Table 3 shows the overall employment and unemployment figures during the war period.

Table 3: Civilian Employment and Unemployment during World War II

(Numbers in thousands)

1940 1941 1942 1943 1944 1945
All Non-institutional Civilians 99,840 99,900 98,640 94,640 93,220 94,090
Civilian Labor Force Total 55,640 55,910 56,410 55,540 54,630 53,860
% of Population 55.7% 56% 57.2% 58.7% 58.6% 57.2%
Employed Total 47,520 50,350 53,750 54,470 53,960 52,820
% of Population 47.6% 50.4% 54.5% 57.6% 57.9% 56.1%
% of Labor Force 85.4% 90.1% 95.3% 98.1% 98.8% 98.1%
Unemployed Total 8,120 5,560 2,660 1,070 670 1,040
% of Population 8.1% 5.6% 2.7% 1.1% 0.7% 1.1%
% of Labor Force 14.6% 9.9% 4.7% 1.9% 1.2% 1.9%

Source: Bureau of Labor Statistics, “Employment status of the civilian noninstitutional population, 1940 to date.” Available at http://www.bls.gov/cps/cpsaat1.pdf.
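The rates in Table 3 are simple ratios of the counts in its first rows; the 1944 figures, for example, reproduce the 1.2 percent unemployment rate cited above:

```python
# 1944 counts from Table 3, in thousands.
labor_force = 54_630
employed    = 53_960
unemployed  = 670

print(f"unemployed as a share of the labor force: {unemployed / labor_force:.1%}")
print(f"employed as a share of the labor force:   {employed / labor_force:.1%}")
```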

Those who had been unemployed during the depression were not the only ones who found jobs. So, too, did about 10.5 million Americans who either could not then have had jobs (the 3.25 million youths who came of age after Pearl Harbor) or who would not then have sought employment (3.5 million women, for instance). By 1945, the percentage of blacks who held war jobs — eight percent — approximated blacks' percentage in the American population — about ten percent (Kennedy, 775). Almost 19 million American women (including millions of black women) were working outside the home by 1945. Though most continued to hold traditional female occupations such as clerical and service jobs, two million women did labor in war industries (half in aerospace alone) (Kennedy, 778). Employment did not just increase on the industrial front. Civilian employment by the executive branch of the federal government — which included the war administration agencies — rose from about 830,000 in 1938 (already a historical peak) to 2.9 million in June 1945 (Nash, 220).

Population Shifts

Migration was another major socioeconomic trend. The 15 million Americans who joined the military — who, that is, became employees of the military — all moved to and between military bases; 11.25 million ended up overseas. Continuing the movements of the depression era, about 15 million civilian Americans made a major move (defined as changing their county of residence). African-Americans moved with particular alacrity and permanence: 700,000 left the South and 120,000 arrived in Los Angeles during 1943 alone. Migration was especially strong along rural-urban axes, especially to war-production centers around the country, and along an east-west axis (Kennedy, 747-748, 768). For instance, as Table 4 shows, the population of the three Pacific Coast states grew by a third between 1940 and 1945, permanently altering their demographics and economies.

Table 4: Population Growth in Washington, Oregon, and California, 1940-1945

(populations in millions)

1940 1941 1942 1943 1944 1945 % growth
1940-1945
Washington 1.7 1.8 1.9 2.1 2.1 2.3 35.3%
Oregon 1.1 1.1 1.1 1.2 1.3 1.3 18.2%
California 7.0 7.4 8.0 8.5 9.0 9.5 35.7%
Total 9.8 10.3 11.0 11.8 12.4 13.1 33.7%

Source: Nash, 222.

A third wartime socioeconomic trend was somewhat ironic, given the reduction in the supply of civilian goods: rapid increases in many Americans’ personal incomes. Driven by the federal government’s abilities to prevent price inflation and to subsidize high wages through war contracting and by the increase in the size and power of organized labor, incomes rose for virtually all Americans — whites and blacks, men and women, skilled and unskilled. Workers at the lower end of the spectrum gained the most: manufacturing workers enjoyed about a quarter more real income in 1945 than in 1940 (Kennedy, 641). These rising incomes were part of a wartime “great compression” of wages which equalized the distribution of incomes across the American population (Goldin and Margo, 1992). Again focusing on three war-boom states in the West, Table 5 shows that personal-income growth continued after the war, as well.

Table 5: Personal Income per Capita in Washington, Oregon, and California, 1940 and 1948

1940 1948 % growth
Washington $655 $929 42%
Oregon $648 $941 45%
California $835 $1,017 22%

Source: Nash, 221. Adjusted for inflation using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl
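The inflation adjustment mentioned in the note to Table 5 is a one-line deflation by the ratio of CPI index levels. A sketch, with approximate CPI values (1982-84 = 100) standing in for the exact BLS figures:

```python
def to_base_year_dollars(nominal, cpi_current, cpi_base):
    """Deflate a current-dollar amount into base-year dollars using CPI index levels."""
    return nominal * cpi_base / cpi_current

# Approximate CPI levels (1982-84 = 100); consult the BLS series for exact values.
cpi_1940, cpi_1948 = 14.0, 24.1
print(f"$1,000 of 1948 income is about ${to_base_year_dollars(1000, cpi_1948, cpi_1940):.0f} in 1940 dollars")
```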

Despite the focus on military-related production in general and the impact of rationing in particular, spending in many civilian sectors of the economy rose even as the war consumed billions of dollars of output. Hollywood boomed as workers bought movie tickets rather than scarce clothes or unavailable cars. Americans placed more legal wagers in 1943 and 1944, and racetracks made more money than at any time before. In 1942, Americans spent $95 million on legal pharmaceuticals, $20 million more than in 1941. Department-store sales in November 1944 were greater than in any previous month in any year (Blum, 95-98). Black markets for rationed or luxury goods — from meat and chocolate to tires and gasoline — also boomed during the war.

Scientific and Technological Innovation

As observers during the war and ever since have recognized, scientific and technological innovations were a key aspect in the American war effort and an important economic factor in the Allies’ victory. While all of the major belligerents were able to tap their scientific and technological resources to develop weapons and other tools of war, the American experience was impressive in that scientific and technological change positively affected virtually every facet of the war economy.

The Manhattan Project

American techno-scientific innovations mattered most dramatically in "high-tech" sectors which were often hidden from public view by wartime secrecy. For instance, the Manhattan Project to create an atomic weapon was a direct and massive result of a stunning scientific breakthrough: the creation of a controlled nuclear chain reaction by a team of scientists at the University of Chicago in December 1942. Under the direction of the U.S. Army and several private contractors, scientists, engineers, and workers built a nationwide complex of laboratories and plants to manufacture atomic fuel and to fabricate atomic weapons. This network included laboratories at the University of Chicago and the University of California-Berkeley, fuel-production complexes at Oak Ridge, Tennessee (uranium), and Hanford, Washington (plutonium), and the weapon-design lab at Los Alamos, New Mexico. The Manhattan Project climaxed in August 1945, when the United States dropped two atomic weapons on Hiroshima and Nagasaki, Japan; these attacks likely accelerated Japanese leaders' decision to seek peace with the United States. By that time, the Manhattan Project had become a colossal economic endeavor, costing approximately $2 billion and employing more than 100,000 workers.

Though important and gigantic, the Manhattan Project was an anomaly in the broader war economy. Technological and scientific innovation also transformed less-sophisticated but still complex sectors such as aerospace or shipbuilding. The United States, as David Kennedy writes, “ultimately proved capable of some epochal scientific and technical breakthroughs, [but] innovated most characteristically and most tellingly in plant layout, production organization, economies of scale, and process engineering” (Kennedy, 648).

Aerospace

Aerospace provides one crucial example. American heavy bombers, like the B-29 Superfortress, were highly sophisticated weapons which could not have existed, much less contributed to the air war on Germany and Japan, without innovations such as bombsights, radar, and high-performance engines or advances in aeronautical engineering, metallurgy, and even factory organization. Encompassing hundreds of thousands of workers, four major factories, and $3 billion in government spending, the B-29 project required almost unprecedented organizational capabilities by the U.S. Army Air Forces, several major private contractors, and labor unions (Vander Meulen, 7). Overall, American aircraft production was the single largest sector of the war economy, costing $45 billion (almost a quarter of the $183 billion spent on war production), employing a staggering two million workers, and, most importantly, producing over 125,000 aircraft, which Table 6 describes in more detail.

Table 6: Production of Selected U.S. Military Aircraft (1941-1945)

Bombers 49,123
Fighters 63,933
Cargo 14,710
Total 127,766

Source: Air Force History Support Office

Shipbuilding

Shipbuilding offers a third example of innovation’s importance to the war economy. Allied strategy in World War II utterly depended on the movement of war materiel produced in the United States to the fighting fronts in Africa, Europe, and Asia. Between 1939 and 1945, the hundred merchant shipyards overseen by the U.S. Maritime Commission (USMC) produced 5,777 ships at a cost of about $13 billion (navy shipbuilding cost about $18 billion) (Lane, 8). Four key innovations facilitated this enormous wartime output. First, the commission itself allowed the federal government to direct the merchant shipbuilding industry. Second, the commission funded entrepreneurs, the industrialist Henry J. Kaiser chief among them, who had never before built ships and who were eager to use mass-production methods in the shipyards. These methods, including the substitution of welding for riveting and the addition of hundreds of thousands of women and minorities to the formerly all-white and all-male shipyard workforces, were a third crucial innovation. Last, the commission facilitated mass production by choosing to build many standardized vessels like the ugly, slow, and ubiquitous “Liberty” ship. By adapting well-known manufacturing techniques and emphasizing easily-made ships, merchant shipbuilding became a low-tech counterexample to the atomic-bomb project and the aerospace industry, yet also a sector which was spectacularly successful.

Reconversion and the War’s Long-term Effects

Reconversion from military to civilian production had been an issue as early as 1944, when WPB Chairman Nelson began pushing to scale back war production in favor of renewed civilian production. The military's opposition to Nelson had contributed to the accession of James Byrnes and the OWM to the paramount spot in the war-production bureaucracy. Meaningful planning for reconversion was postponed until 1944 and the actual process of reconversion only began in earnest in early 1945, accelerating through V-E Day in May and V-J Day in September.

The most obvious effect of reconversion was the shift away from military production and back to civilian production. As Table 7 shows, this shift — as measured by declines in overall federal spending and in military spending — was dramatic, but did not cause the postwar depression which many Americans dreaded. Rather, American GDP continued to grow after the war (albeit not as rapidly as it had during the war; compare Table 1). The high level of defense spending, in turn, contributed to the creation of the “military-industrial complex,” the network of private companies, non-governmental organizations, universities, and federal agencies which collectively shaped American national defense policy and activity during the Cold War.

Table 7: Federal Spending and Military Spending after World War II

(dollar values in billions of current dollars)

Nominal GDP Federal Spending Defense Spending
Year Total % increase Total % increase % of GDP Total % increase % of GDP % of federal spending
1945 223.10 92.71 1.50% 41.90% 82.97 4.80% 37.50% 89.50%
1946 222.30 -0.36% 55.23 -40.40% 24.80% 42.68 -48.60% 19.20% 77.30%
1947 244.20 8.97% 34.5 -37.50% 14.80% 12.81 -70.00% 5.50% 37.10%
1948 269.20 9.29% 29.76 -13.70% 11.60% 9.11 -28.90% 3.50% 30.60%
1949 267.30 -0.71% 38.84 30.50% 14.30% 13.15 44.40% 4.80% 33.90%
1950 293.80 9.02% 42.56 9.60% 15.60% 13.72 4.40% 5.00% 32.20%

Sources: 1945 GDP figure from Louis Johnston and Samuel H. Williamson, “The Annual Real and Nominal GDP for the United States, 1789 — Present,” Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1946-1950 GDP figures calculated using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, “Budget of the United States Government: Historical Tables Fiscal Year 2005,” Table 6.1—Composition of Outlays: 1940—2009 and Table 3.1—Outlays by Superfunction and Function: 1940—2009.

Reconversion spurred the second major restructuring of the American workplace in five years, as returning servicemen flooded back into the workforce and many war workers left, either voluntarily or involuntarily. For instance, many women left the labor force beginning in 1944 — sometimes voluntarily and sometimes involuntarily. In 1947, about a quarter of all American women worked outside the home, roughly the same number who had held such jobs in 1940 and far off the wartime peak of 36 percent in 1944 (Kennedy, 779).

G.I. Bill

Servicemen obtained numerous other economic benefits beyond their jobs, including educational assistance from the federal government and guaranteed mortgages and small-business loans via the Servicemen's Readjustment Act of 1944 or "G.I. Bill." Former servicemen thus became a vast and advantaged class of citizens which demanded, among other goods, inexpensive, often suburban housing; vocational training and college educations; and private cars which had been unobtainable during the war (Kennedy, 786-787).

The U.S.’s Position at the End of the War

At a macroeconomic scale, the war not only decisively ended the Great Depression, but created the conditions for productive postwar collaboration between the federal government, private enterprise, and organized labor, the parties whose tripartite collaboration helped engender continued economic growth after the war. The U.S. emerged from the war physically unscathed and economically strengthened by wartime industrial expansion, which placed the United States at an absolute and relative advantage over both its allies and its enemies.

Possessed of an economy which was larger and richer than any other in the world, American leaders determined to make the United States the center of the postwar world economy. American aid to Europe ($13 billion via the European Recovery Program (ERP) or "Marshall Plan," 1947-1951) and Japan ($1.8 billion, 1946-1952) furthered this goal by tying the economic reconstruction of West Germany, France, Great Britain, and Japan to American import and export needs, among other factors. Even before the war ended, the Bretton Woods Conference in 1944 determined key aspects of international economic affairs by establishing standards for currency convertibility and creating institutions such as the International Monetary Fund and the precursor of the World Bank.

In brief, as economic historian Alan Milward writes, "the United States emerged in 1945 in an incomparably stronger position economically than in 1941... By 1945 the foundations of the United States' economic domination over the next quarter of a century had been secured... [This] may have been the most influential consequence of the Second World War for the post-war world" (Milward, 63).

Selected References

Adams, Michael C.C. The Best War Ever: America and World War II. Baltimore: Johns Hopkins University Press, 1994.

Anderson, Karen. Wartime Women: Sex Roles, Family Relations, and the Status of Women during World War II. Westport, CT: Greenwood Press, 1981.

Air Force History Support Office. “Army Air Forces Aircraft: A Definitive Moment.” U.S. Air Force, 1993. Available at http://www.airforcehistory.hq.af.mil/PopTopics/AAFaircraft.htm.

Blum, John Morton. V Was for Victory: Politics and American Culture during World War II. New York: Harcourt Brace, 1976.

Bordo, Michael. “The Gold Standard, Bretton Woods, and Other Monetary Regimes: An Historical Appraisal.” NBER Working Paper No. 4310. April 1993.

“Brief History of World War Two Advertising Campaigns.” Duke University Rare Book, Manuscript, and Special Collections, 1999. Available at http://scriptorium.lib.duke.edu/adaccess/wwad-history.html

Brody, David. “The New Deal and World War II.” In The New Deal, vol. 1, The National Level, edited by John Braeman, Robert Bremmer, and David Brody, 267-309. Columbus: Ohio State University Press, 1975.

Connery, Robert. The Navy and Industrial Mobilization in World War II. Princeton: Princeton University Press, 1951.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or, an Explanation of Unemployment, 1934-1941.” Journal of Political Economy 84, no. 1 (February 1976): 1-16.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93, no 4 (September 2003): 1399-1414.

Field, Alexander J. “U.S. Productivity Growth in the Interwar Period and the 1990s.” (Paper presented at “Understanding the 1990s: the Long Run Perspective” conference, Duke University and the University of North Carolina, March 26-27, 2004) Available at www.unc.edu/depts/econ/seminars/Field.pdf.

Fischer, Gerald J. A Statistical Summary of Shipbuilding under the U.S. Maritime Commission during World War II. Washington, DC: Historical Reports of War Administration; United States Maritime Commission, no. 2, 1949.

Friedberg, Aaron. In the Shadow of the Garrison State. Princeton: Princeton University Press, 2000.

Gluck, Sherna Berger. Rosie the Riveter Revisited: Women, the War, and Social Change. Boston: Twayne Publishers, 1987.

Goldin, Claudia. “The Role of World War II in the Rise of Women’s Employment.” American Economic Review 81, no. 4 (September 1991): 741-56.

Goldin, Claudia and Robert A. Margo. “The Great Compression: Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 2 (February 1992): 1-34.

Harrison, Mark, editor. The Economics of World War II: Six Great Powers in International Comparison. Cambridge: Cambridge University Press, 1998.

Higgs, Robert. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s.” Journal of Economic History 52, no. 1 (March 1992): 41-60.

Holley, I.B. Buying Aircraft: Materiel Procurement for the Army Air Forces. Washington, DC: U.S. Government Printing Office, 1964.

Hooks, Gregory. Forging the Military-Industrial Complex: World War II’s Battle of the Potomac. Urbana: University of Illinois Press, 1991.

Janeway, Eliot. The Struggle for Survival: A Chronicle of Economic Mobilization in World War II. New Haven: Yale University Press, 1951.

Jeffries, John W. Wartime America: The World War II Home Front. Chicago: Ivan R. Dee, 1996.

Johnston, Louis and Samuel H. Williamson. “The Annual Real and Nominal GDP for the United States, 1789 – Present.” Available at Economic History Services, March 2004, URL: http://www.eh.net/hmit/gdp/; accessed 3 June 2005.

Kennedy, David M. Freedom from Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999.

Kryder, Daniel. Divided Arsenal: Race and the American State during World War II. New York: Cambridge University Press, 2000.

Koistinen, Paul A.C. Arsenal of World War II: The Political Economy of American Warfare, 1940-1945. Lawrence, KS: University Press of Kansas, 2004.

Lane, Frederic, with Blanche D. Coll, Gerald J. Fischer, and David B. Tyler. Ships for Victory: A History of Shipbuilding under the U.S. Maritime Commission in World War II. Baltimore: Johns Hopkins University Press, 1951; republished, 2001.

Lichtenstein, Nelson. Labor’s War at Home: The CIO in World War II. New York: Cambridge University Press, 1982.

Lingeman, Richard P. Don’t You Know There’s a War On? The American Home Front, 1941-1945. New York: G.P. Putnam’s Sons, 1970.

Milkman, Ruth. Gender at Work: The Dynamics of Job Segregation by Sex during World War II. Urbana: University of Illinois Press, 1987.

Milward, Alan S. War, Economy, and Society, 1939-1945. Berkeley: University of California Press, 1979.

Nash, Gerald D. The American West Transformed: The Impact of the Second World War. Lincoln: University of Nebraska Press, 1985.

Nelson, Donald M. Arsenal of Democracy: The Story of American War Production. New York: Harcourt Brace, 1946.

O’Neill, William L. A Democracy at War: America’s Fight at Home and Abroad in World War II. New York: Free Press, 1993.

Overy, Richard. How the Allies Won. New York: W.W. Norton, 1995.

Rockoff, Hugh. “The Response of the Giant Corporations to Wage and Price Control in World War II.” Journal of Economic History 41, no. 1 (March 1981): 123-28.

Rockoff, Hugh. “Price and Wage Controls in Four Wartime Periods.” Journal of Economic History 41, no. 2 (June 1981): 381-401.

Samuelson, Robert J., “Great Depression.” The Concise Encyclopedia of Economics. Indianapolis: Liberty Fund, Inc., ed. David R. Henderson, 2002. Available at http://www.econlib.org/library/Enc/GreatDepression.html

U.S. Department of the Treasury, “Fact Sheet: Taxes,” n. d. Available at http://www.treas.gov/education/fact-sheets/taxes/ustax.shtml

U.S. Department of the Treasury, “Introduction to Savings Bonds,” n.d. Available at http://www.treas.gov/offices/treasurer/savings-bonds.shtml

Vander Meulen, Jacob. Building the B-29. Washington, DC: Smithsonian Institution Press, 1995.

Watkins, Thayer. “The Recovery from the Depression of the 1930s.” 2002. Available at http://www2.sjsu.edu/faculty/watkins/recovery.htm

Citation: Tassava, Christopher. “The American Economy during World War II”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-american-economy-during-world-war-ii/

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in amount and timing of rainfall, the project was abandoned after five years and initial capital outlays of 24 million British pounds and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy's Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the United States West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time after accidents and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California; it was known that a high percentage of all days were sunny, so that outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor will lead to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and now smaller future harvest will have to be consumed more slowly over the time period up until the next season's crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop's inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in other periods. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.
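A stylized numerical example, not drawn from the article, illustrates the smoothing argument: with constant-elasticity demand, a smaller expected harvest requires a proportionally larger price increase to slow consumption when demand is inelastic.

```python
# Stylized example: constant-elasticity demand, Q = A * P**(-elasticity).
# Consuming 10 percent less of the crop over the season requires the price rise below.
elasticity   = 0.5    # assumed own-price elasticity of demand for a staple crop
supply_shock = 0.10   # harvest expected to be 10 percent smaller than normal

required_price_rise = (1 - supply_shock) ** (-1 / elasticity) - 1
print(f"price must rise by about {required_price_rise:.0%} to ration the smaller crop")
```

With the assumed elasticity of 0.5, a 10 percent shortfall calls for a price rise of roughly 23 percent, which is what spreads the smaller crop over the remainder of the season.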

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining if private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good such that private organizations would create an insufficiently large amount of socially beneficial information? There are also two parts to this latter public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that the observer might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had even already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled in a westerly direction, none of these groups apparently expected the benefits to itself to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe's weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled "Disaster on the Lakes." The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868 and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships that were totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham's list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5,000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine's office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer's eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the Congressional Joint Resolution which "authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms." Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations at twenty-four stations on November 1, 1870, at 7:35 a.m. Washington time. The storm-warning system began formal operation on October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress expanded the Army Signal Service's explicit forecast responsibilities via an appropriations act to most of the United States "for such stations, reports, and signal as may be found necessary for the benefit of agriculture and commercial interests." In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
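One way to see why the 70 percent verification figure is incomplete is that it scores only the warnings that were flown, not the storms that passed unwarned. The sketch below uses the 354 warnings from the first year of operation together with a purely hypothetical count of unwarned storms:

```python
# Warning counts: the 354 signals and a ~70% verification share come from the text;
# the number of unwarned storms is a hypothetical figure for illustration only.
warnings_issued   = 354
warnings_verified = 248   # roughly 70 percent of the signals flown
storms_unwarned   = 60    # hypothetical: qualifying wind events with no signal flown

verification_rate = warnings_verified / warnings_issued
detection_rate    = warnings_verified / (warnings_verified + storms_unwarned)

print(f"share of warnings verified: {verification_rate:.0%}")  # the measure reported above
print(f"share of storms warned:     {detection_rate:.0%}")     # falls when forecasters warn only in obvious cases
```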

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service Meteorological Network from 1870 to 1890.) Additional display stations only provided storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year   Budget (Real 1880 Dollars)   Second Order   Third Order   Repair   Display   River   Cotton-Region
1870   32,487        25    -    -    -    -    -
1871   112,456       54    -    -    -    -    -
1872   220,269       65    -    -    -    -    -
1873   549,634       80    9    -    -    -    -
1874   649,431       92    20   -    -    -    -
1875   749,228       98    20   -    -    -    -
1876   849,025       106   38   23   -    -    -
1877   849,025       116   29   10   9    23   -
1878   978,085       136   36   12   11   23   -
1879   1,043,604     158   30   17   46   30   -
1880   1,109,123     173   39   49   50   29   -
1881   1,080,254     171   47   44   61   29   87
1882   937,077       169   45   3    74   30   127
1883   950,737       143   42   27   7    30   124
1884   1,014,898     138   68   7    63   40   138
1885   1,085,479     152   58   8    64   66   137
1886   1,150,673     146   33   11   66   69   135
1887   1,080,291     145   31   13   63   70   133
1888   1,063,639     149   30   24   68   78   116
1889   1,022,031     148   32   23   66   72   114
1890   994,629       144   34   15   73   72   114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and Craft (1995), “The Provision and Value of Weather Information Services,” p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day; most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations showed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to a reduction in fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid-1870s and between $1 million and $4.5 million per year by the early 1880s.

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874, p. 2; December 18, 1875; December 27, 1876, p. 6; December 17, 1878; December 29, 1879, p. 6; February 3, 1881, p. 12; December 28, 1883, p. 3; December 5, 1885, p. 4); Marine Record (December 27, 1883, p. 5; December 25, 1884, pp. 4-5; December 24, 1885, pp. 4-5; December 30, 1886, p. 6; December 15, 1887, pp. 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871-1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.
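
The regression logic described above can be made concrete with a small sketch. The Python code below uses fabricated placeholder data rather than Craft’s (1998) actual series; the variable names, magnitudes, and specification are illustrative assumptions only. Because the dependent variable is the logarithm of losses, the coefficient on the number of warning locations can be read as the approximate percentage change in losses per additional location.

import numpy as np

# Fabricated placeholder data for twenty hypothetical shipping seasons;
# none of these numbers come from the historical record.
rng = np.random.default_rng(0)
n = 20
warning_locations = rng.integers(40, 85, size=n)   # storm-warning display points
tonnage = rng.normal(500.0, 50.0, size=n)          # shipping tonnage available
commerce = rng.normal(100.0, 10.0, size=n)         # volume of lake commerce
severity = rng.normal(0.0, 1.0, size=n)            # weather-severity index
trend = np.arange(n)                               # long-run technology/safety trend

# Assumed "true" relationship: each extra warning location cuts losses about 1 percent.
log_losses = (10.0 - 0.01 * warning_locations + 0.002 * tonnage + 0.004 * commerce
              + 0.05 * severity - 0.01 * trend + rng.normal(0.0, 0.02, size=n))

# Ordinary least squares: the effects of all factors are estimated concurrently.
X = np.column_stack([np.ones(n), warning_locations, tonnage, commerce, severity, trend])
coef, *_ = np.linalg.lstsq(X, log_losses, rcond=None)
print(f"estimated effect of one extra warning location: {coef[1]:.3%} change in losses")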

There are additional, indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, reductions in shipping prices due to savings in storm-caused losses can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices, because it was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in the price of shipping grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premium data for shipments on the Great Lakes are limited and difficult to interpret due to the waxing and waning of the insurance cartel’s cohesion, such data also support the overall interpretation.
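
The seasonal comparison can be illustrated with a simple calculation. The rates below are invented placeholders, not the actual Chicago-to-Buffalo grain rates; the point is only the mechanics: if storm warnings reduced storm-related losses, the premium of fall rates over summer rates should shrink over time, apart from any general decline in freight rates.

# Hypothetical freight rates, cents per bushel of grain (placeholders, not historical data).
summer = {1868: 9.0, 1872: 8.5, 1876: 7.6, 1880: 6.4, 1884: 5.8, 1888: 5.2}
fall   = {1868: 12.6, 1872: 11.5, 1876: 9.5, 1880: 7.5, 1884: 6.6, 1888: 5.8}

for year in sorted(summer):
    premium = (fall[year] - summer[year]) / summer[year]
    print(f"{year}: fall premium over summer rate = {premium:.0%}")
# A shrinking fall premium is, in this framework, the signature of reduced storm risk.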

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable minimum bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States federal government was a socially beneficial investment.
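
The rate-of-return arithmetic can be sketched as follows. The cash flows below are rough placeholders loosely patterned on the magnitudes mentioned in the text (Signal Service budgets versus Great Lakes loss reductions); they are assumptions for illustration and do not reproduce the data underlying the 64 percent estimate. With small early outlays followed by annual benefits that soon exceed the whole budget, the implied return is very high, which is the qualitative point of the lower bound.

def internal_rate_of_return(cashflows, lo=-0.9, hi=10.0, tol=1e-6):
    """Return the discount rate at which the net present value of the flows is zero."""
    def npv(rate):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Net social flows for 1870-1888 in millions of real 1880 dollars:
# (assumed benefits from storm warnings) minus (entire Signal Service budget).
costs    = [0.03, 0.11, 0.22, 0.55, 0.65, 0.75, 0.85, 0.85, 0.98] + [1.0] * 10
benefits = [0.00, 0.00, 0.30, 0.60, 0.90, 1.00, 1.00, 1.00, 1.20] + [2.0] * 10
net = [b - c for b, c in zip(benefits, costs)]
print(f"implied internal rate of return: {internal_rate_of_return(net):.0%}")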

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings of 1884 and 1885 sought to determine the appropriate organization of federal agencies whose activities included scientific research. The commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses to deficient job performance, including courts-martial for soldiers. Problems with the military organization, however, included the limited ability of employees to advance in rank while working for the Signal Service and tension between civilian and military personnel. In 1891, after an unsuccessful congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper-air weather conditions grew rapidly after the turn of the century on account of two related developments: aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change in the Weather Bureau’s organizational direction since its transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38 percent of the Weather Bureau’s budget being directed toward aerological research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. In 1940 the Weather Bureau was transferred to the Department of Commerce, where other federal support for aviation already originated. The transfer mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. The agency, later renamed the National Weather Service, has remained in the Department of Commerce ever since.

World War II

During World War II, weather forecasts assumed greater importance as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider the Allied plans to strike the German oil refineries in Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers flying from North Africa could reach the refineries only in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identifying targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to help spread the fire. Historical data indicated that only March and August offered possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy, planned for June 5, 1944, and postponed until June 6. The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm, with much loss of life, in October 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. The first warnings were issued on February 6, 1861, and by August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand pounds. Criticism arose from two groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. The second group, publishers of weather almanacs, often subscribed to various theories of the influence of the moon or other celestial bodies on weather (this is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity, meteorology, and business cycles). Some almanac publishers supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amid the criticism, FitzRoy committed suicide in 1865. Forecasts and warnings were discontinued in 1866; warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote to the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July 1863. Because storms generally move from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder the effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Meteorological Bureau was not founded until 1878, and then with a budget of only $12,000.

After the initiation of the storm-warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques after World War I, incorporating cold and warm fronts. In the difficult days in Norway at the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. The theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than its cost of production. In the early winter of 1870, the scientist Increase Lapham and a Chicago businessman discussed the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999), but earlier attempts by private organizations in the United States had failed to sustain any private weather-forecasting service. In the contemporary United States, the federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts, compiled from ships’ logs, showing efficient sailing routes.

1847

A barometer is used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and one-half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three decade long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization leading to the development of new forecasting theories centered on three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no.5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

Turnpikes and Toll Roads in Nineteenth-Century America

Daniel B. Klein, Santa Clara University and John Majewski, University of California – Santa Barbara 1

Private turnpikes were business corporations that built and maintained a road for the right to collect fees from travelers.2 Accounts of the nineteenth-century transportation revolution often treat turnpikes as merely a prelude to more important improvements such as canals and railroads. Turnpikes, however, left important social and political imprints on the communities that debated and supported them. Although turnpikes rarely paid dividends or other forms of direct profit, they nevertheless attracted enough capital to expand both the coverage and quality of the U. S. road system. Turnpikes demonstrated how nineteenth-century Americans integrated elements of the modern corporation – with its emphasis on profit-taking residual claimants – with non-pecuniary motivations such as use and esteem.

Private road building came and went in waves throughout the nineteenth century and across the country, with between 2,500 and 3,200 companies successfully financing, building, and operating their toll roads. There were three especially important episodes of toll road construction: the turnpike era of the eastern states, 1792 to 1845; the plank road boom, 1847 to 1853; and the toll roads of the far West, 1850 to 1902.

The Turnpike Era, 1792–1845

Prior to the 1790s Americans had no direct experience with private turnpikes; roads were built, financed, and managed mainly by town governments. Typically, townships levied a road labor tax. The State of New York, for example, assessed eligible males a minimum of three days of roadwork under penalty of a fine of one dollar. The labor requirement could be avoided if the worker paid a fee of 62.5 cents a day. As with public works of any kind, incentives were weak because the chain of activity could not be traced to a residual claimant – that is, private owners who claim the “residuals,” profit or loss. The laborers were brought together in a transitory, disconnected manner. Since overseers and laborers were commonly farmers, too often the crop schedule, rather than road deterioration, dictated the repair schedule. Except in cases of special appropriations, financing came in dribbles deriving mostly from the fines and commutations of the assessed inhabitants. Commissioners could hardly lay plans for decisive improvements. When a needed connection passed through unsettled lands, it was especially difficult to mobilize labor because assessments could be worked out only in the district in which the laborer resided. Because work areas were divided into districts, as well as into towns, problems arose in coordinating the various jurisdictions. Road conditions thus remained inadequate, as New York’s governors often acknowledged publicly (Klein and Majewski 1992, 472-75).

For Americans looking for better connections to markets, the poor state of the road system was a major problem. In 1790, a viable steamboat had not yet been built, canal construction was hard to finance and limited in scope, and the first American railroad would not be completed for another forty years. Better transportation meant, above all, better highways. State and local governments, however, had small bureaucracies and limited budgets which prevented a substantial public sector response. Turnpikes, in essence, were organizational innovations borne out of necessity – “the states admitted that they were unequal to the task and enlisted the aid of private enterprise” (Durrenberger 1931, 37).

America’s very limited and lackluster experience with the publicly operated toll roads of the 1780s hardly portended a future boom in private toll roads, but the success of private toll bridges may have inspired some future turnpike companies. From 1786 to 1798, fifty-nine private toll bridge companies were chartered in the northeast, beginning with Boston’s Charles River Bridge, which brought investors an average annual return of 10.5 percent in its first six years (Davis 1917, II, 188). Private toll bridges operated without many of the regulations that would hamper the private toll roads that soon followed, such as mandatory toll exemptions and conflicts over the location of toll gates. Also, toll bridges, by their very nature, faced little toll evasion, which was a serious problem for toll roads.

The more significant predecessor to America’s private toll road movement was Britain’s success with private toll roads. Beginning in 1663 and peaking from 1750 to 1772, Britain experienced a private turnpike movement large enough to acquire the nickname “turnpike mania” (Pawson 1977, 151). Although the British movement inspired the future American turnpike movement, the institutional differences between the two were substantial. Most important, perhaps, was the difference in their organizational forms. British turnpikes were incorporated as trusts – non-profit organizations financed by bonds – while American turnpikes were stock-financed corporations seemingly organized to pay dividends, though acting within narrow limits determined by the charter. Contrary to modern sensibilities, this difference made the British trusts, which operated under the firm expectation of fulfilling bond obligations, more intent and more successful in garnering residuals. In contrast, for the American turnpikes the hope of dividends was merely a faint hope, and never a legal obligation. Odd as it sounds, the stock-financed “business” corporation was better suited to operating the project as a civic enterprise, paying out returns in use and esteem rather than cash.

The first private turnpike in the United States was chartered by Pennsylvania in 1792 and opened two years later. Spanning 62 miles between Philadelphia and Lancaster, it quickly attracted the attention of merchants in other states, who recognized its potential to direct commerce away from their regions. Soon lawmakers from those states began chartering turnpikes. By 1800, 69 turnpike companies had been chartered throughout the country, especially in Connecticut (23) and New York (13). Over the next decade nearly six times as many turnpikes were incorporated (398). Table 1 shows that in the mid-Atlantic and New England states between 1800 and 1830, turnpike companies accounted for 27 percent of all business incorporations.

Table 1: Turnpikes as a Percentage of All Business Incorporations,
by Special and General Acts, 1800-1830

As shown in Table 2, a wider set of states had incorporated 1,562 turnpikes by the end of 1845. Somewhere between 50 and 70 percent of these succeeded in building and operating toll roads. A variety of regulatory and economic conditions – outlined below – account for why a relatively low percentage of chartered turnpikes became going concerns. In New York, for example, tolls could be collected only after turnpikes passed inspections, which were typically conducted after ten miles of roadway had been built. Only 35 to 40 percent of New York turnpike projects – or about 165 companies – reached operational status. In Connecticut, by contrast, where settlement covered the state and turnpikes more often took over existing roadbeds, construction costs were much lower and about 87 percent of the companies reached operation (Taylor 1934, 210).

Table 2: Turnpike Incorporation, 1792-1845

State 1792-1800 1801-10 1811-20 1821-30 1831-40 1841-45 Total
NH 4 45 5 1 4 0 59
VT 9 19 15 7 4 3 57
MA 9 80 8 16 1 1 115
RI 3 13 8 13 3 1 41
CT 23 37 16 24 13 0 113
NY 13 126 133 75 83 27 457
PA 5 39 101 59 101 37 342
NJ 0 22 22 3 3 0 50
VA 0 6 7 8 25 0 46
MD 3 9 33 12 14 7 78
OH 0 2 14 12 114 62 204
Total 69 398 362 230 365 138 1562

Source: Klein and Fielding 1992: 325.

Although the states of Pennsylvania, Virginia and Ohio subsidized privately-operated turnpike companies, most turnpikes were financed solely by private stock subscription and structured to pay dividends. This was a significant achievement, considering the large construction costs (averaging around $1,500 to $2,000 per mile) and the typical length (15 to 40 miles). But the achievement was most striking because, as New England historian Edward Kirkland (1948, 45) put it, “the turnpikes did not make money. As a whole this was true; as a rule it was clear from the beginning.” Organizers and “investors” generally regarded the initial proceeds from sale of stock as a fund from which to build the facility, which would then earn enough in toll receipts to cover operating expenses. One might hope for dividend payments as well, but “it seems to have been generally known long before the rush of construction subsided that turnpike stock was worthless” (Wood 1919, 63).3

Turnpikes promised little in the way of direct dividends and profits, but they offered potentially large indirect benefits. Because turnpikes facilitated movement and trade, nearby merchants, farmers, land owners, and ordinary residents would benefit from a turnpike. Gazetteer Thomas F. Gordon aptly summarized the relationship between these “indirect benefits” and investment in turnpikes: “None have yielded profitable returns to the stockholders, but everyone feels that he has been repaid for his expenditures in the improved value of his lands, and the economy of business” (quoted in Majewski 2000, 49). Gordon’s statement raises an important question. If one could not be excluded from benefiting from a turnpike, and if dividends were not in the offing, what incentive would anyone have to help finance turnpike construction? The turnpike communities faced a serious free-rider problem.

Nevertheless, hundreds of communities overcame the free-rider problem, mostly through a civic-minded culture that encouraged investment for long-term community gain. Alexis de Tocqueville observed that, excepting those of the South, Americans were infused with a spirit of public-mindedness. Their strong sense of community spirit resulted in the funding of schools, libraries, hospitals, churches, canals, dredging companies, wharves, and water companies, as well as turnpikes (Goodrich 1948). Vibrant community and cooperation sprang, according to Tocqueville, from the fertile ground of liberty:

If it is a question of taking a road past his property, [a man] sees at once that this small public matter has a bearing on his greatest private interests, and there is no need to point out to him the close connection between his private profit and the general interest. … Local liberties, then, which induce a great number of citizens to value the affection of their kindred and neighbors, bring men constantly into contact, despite the instincts which separate them, and force them to help one another. … The free institutions of the United States and the political rights enjoyed there provide a thousand continual reminders to every citizen that he lives in society. … Having no particular reason to hate others, since he is neither their slave nor their master, the American’s heart easily inclines toward benevolence. At first it is of necessity that men attend to the public interest, afterward by choice. What had been calculation becomes instinct. By dint of working for the good of his fellow citizens, he in the end acquires a habit and taste for serving them. … I maintain that there is only one effective remedy against the evils which equality may cause, and that is political liberty (Alexis de Tocqueville, 511-13, Lawrence/Mayer edition).

Tocqueville’s testimonial is broad and general, but its accuracy is seen in the archival records and local histories of the turnpike communities. Stockholders’ lists reveal a web of neighbors, kin, and locally prominent figures voluntarily contributing to what they saw as an important community improvement. Appeals made in newspapers, local speeches, town meetings, door-to-door solicitations, correspondence, and negotiations in assembling the route stressed the importance of community improvement rather than dividends.4 Furthermore, many toll road projects involved the effort to build a monument and symbol of the community. Participating in a company by donating cash or giving moral support was a relatively rewarding way of establishing public services; it was pursued at least in part for the sake of community romance and adventure as ends in themselves (Brown 1973, 68). It should be noted that turnpikes were not entirely exceptional enterprises in the early nineteenth century. In many fields, the corporate form had a public-service ethos, aimed not primarily at paying dividends but at serving the community (Handlin and Handlin 1945, 22; Goodrich 1948, 306; Hurst 1970, 15).

Given the importance of community activism and long-term gains, most “investors” tended to be not outside speculators, but locals positioned to enjoy the turnpikes’ indirect benefits. “But with a few exceptions, the vast majority of the stockholders in turnpike were farmers, land speculators, merchants or individuals and firms interested in commerce” (Durrenberger 1931, 104). A large number of ordinary households held turnpike stock. Pennsylvania compiled the most complete set of investment records, which show that more than 24,000 individuals purchased turnpike or toll bridge stock between 1800 and 1821. The average holding was $250 worth of stock, and the median was less than $150 (Majewski 2001). Such sums indicate that most turnpike investors were wealthier than the average citizen, but hardly part of the urban elite that dominated larger corporations such as the Bank of the United States. County-level studies indicate that most turnpike investment came from farmers and artisans, as opposed to the merchants and professionals more usually associated with early corporations (Majewski 2000, 49-53).

Turnpikes became symbols of civic pride only after enduring a period of substantial controversy. In the 1790s and early 1800s, some Americans feared that turnpikes would become “engrossing monopolists” who would charge travelers exorbitant tolls or abuse eminent domain privileges. Others simply did not want to pay for travel that had formerly been free. To conciliate these different groups, legislators wrote numerous restrictions into turnpike charters. Toll gates, for example, often could be spaced no closer than every five or even ten miles. This regulation enabled some users to travel without encountering a toll gate, and eased the practice of steering horses and the high-mounted vehicles of the day off the main road so as to evade the toll gate, a practice known as “shunpiking.” The charters or general laws also granted numerous exemptions from toll payment. In New York, the exempt included people traveling on family business, those attending or returning from church services and funerals, town meetings, blacksmiths’ shops, those on military duty, and those who lived within one mile of a toll gate. In Massachusetts some of the same trips were exempt and also anyone residing in the town where the gate is placed and anyone “on the common and ordinary business of family concerns” (Laws of Massachusetts 1805, chapter 79, 649). In the face of exemptions and shunpiking, turnpike operators sometimes petitioned authorities for a toll hike, stiffer penalties against shunpikers, or the relocating of the toll gate. The record indicates that petitioning the legislature for such relief was a costly and uncertain affair (Klein and Majewski 1992, 496-98).

In view of the difficult regulatory environment and apparent free-rider problem, the success of early turnpikes in raising money and improving roads was striking. The movement built new roads at rates previously unheard of in America. Table 3 gives ballpark estimates of the cumulative investment in constructing turnpikes up to 1830 in New England and the Middle Atlantic; repair and maintenance costs are excluded. These construction investment figures are probably too low – they generally exclude, for example, toll revenue that might have been used to finish construction – but they nevertheless indicate the ability of private initiatives to raise money in an economy in which capital was in short supply. Turnpike companies in these states raised more than $24 million by 1830, an amount equaling 6.15 percent of those states’ 1830 GDP. To put this into comparative perspective, between 1956 and 1995 all levels of government spent $330 billion (in 1996 dollars) building the interstate highway system, a cumulative total equaling only 4.30 percent of 1996 GDP.

Table 3
Cumulative Turnpike Investment (1800-1830) as percentage of 1830 GNP

State Cumulative Turnpike Investment, 1800-1830 ($) Cumulative Turnpike Investment as Percent of 1830 GDP Cumulative Turnpike Investment per Capita, 1830 ($)
Maine 35,000 0.16 0.09
New Hampshire 575,100 2.11 2.14
Vermont 484,000 3.37 1.72
Massachusetts 4,200,000 7.41 6.88
Rhode Island 140,000 1.54 1.44
Connecticut 1,036,160 4.68 3.48
New Jersey 1,100,000 4.79 3.43
New York 9,000,000 7.06 4.69
Pennsylvania 6,400,000 6.67 4.75
Maryland 1,500,000 3.85 3.36
TOTAL 24,470,260 6.15 4.49
Interstate Highway System, 1956-1996 330 Billion 4.15 (1996 GNP)

Sources: Pennsylvania turnpike investment: Durrenberger 1931, 61; New England turnpike investment: Taylor 1934, 210-11; New York, New Jersey, and Maryland turnpike investment: Fishlow 2000, 549. Only private investment is included. State GDP data come from Bodenhorn 2000, 237. Figures for the cost of the Interstate Highway System can be found at http://www.publicpurpose.com/hwy-is$.htm. Note that our investment figures generally do not include investment to finish roads financed by loans or by the use of toll revenue; the table therefore underestimates investment in turnpikes.
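
As a quick consistency check on the comparison, the short computation below uses only the totals reported in Table 3 to back out the implied aggregate GDP and population of the ten states; the results are back-of-envelope figures, not data drawn from the sources.

# Totals taken from Table 3 above.
total_investment = 24_470_260      # cumulative turnpike investment, 1800-1830, in dollars
share_of_gdp = 0.0615              # 6.15 percent of the ten states' 1830 GDP
per_capita = 4.49                  # dollars of turnpike investment per resident in 1830

implied_gdp = total_investment / share_of_gdp
implied_population = total_investment / per_capita
print(f"implied 1830 GDP of the ten states: about ${implied_gdp / 1e6:,.0f} million")
print(f"implied 1830 population of the ten states: about {implied_population / 1e6:.1f} million")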

The organizational advantages of turnpike companies relative to government-managed roads generated not only more road mileage but also higher-quality roads (Taylor 1934, 334; Parks 1967, 23, 27). New York state gazetteer Horatio Spafford (1824, 125) wrote that turnpikes had been “an excellent school, in every road district, and people now work the highways to much better advantage than formerly.” Companies worked to develop roadway intelligently so as to achieve connective communication. The corporate form traversed town and county boundaries, so a single company could bring what would otherwise be separate segments together into a single organization. “Merchants and traders in New York sponsored pikes leading across northern New Jersey in order to tap the Delaware Valley trade which would otherwise have gone to Philadelphia” (Lane 1939, 156).

Turnpike networks became highly organized systems that sought to find the most efficient way of connecting eastern cities with western markets. Decades before the Erie Canal, private individuals realized the natural opening through the Appalachians and planned a system of turnpikes connecting Albany to Syracuse and beyond. Figure 1 shows the principal routes westward from Albany. The upper route begins with the Albany & Schenectady Turnpike, connects to the Mohawk Turnpike, and then the Seneca Turnpike. The lower route begins with the First Great Western Turnpike and then branches at Cherry Valley into the Second and Third Great Western Turnpikes. Corporate papers of these companies reveal that organizers of different companies talked to each other; they were quite capable of coordinating their intentions and planning mutually beneficial activities by voluntary means. When the Erie Canal was completed in 1825 it roughly followed the alignment of the upper route and greatly reduced travel on the competing turnpikes (Baer, Klein, and Majewski 1992).

Figure 1: Turnpike Network in Central New York, 1845

Another excellent example of turnpike integration was the Pittsburgh Pike. The Pennsylvania route consisted of a combination of five turnpike companies, each of which built a road segment connecting Pittsburgh and Harrisburg, where travelers could take another series of turnpikes to Philadelphia. Completed in 1820, the Pittsburgh Pike greatly improved freighting over the rugged Allegheny Mountains. Freight rates between Philadelphia and Pittsburgh were cut in half because wagons increased their capacity, speed, and certainty (Reiser 1951, 76-77). Although the state government invested in the companies that formed the Pittsburgh Pike, records of the two companies for which we have complete investment information show that private interests contributed 62 percent of the capital (calculated from Majewski 2000, 47-51; Reiser 1951, 76). Residents in numerous communities contributed to individual projects out of their own self-interest. Their provincialism nevertheless helped create a coherent and integrated system.

A comparison of the Pittsburgh Pike and the National Road demonstrates the advantages of turnpike corporations over roads financed directly from government sources. Financed by the federal government, the National Road was built between Cumberland, Maryland, and Wheeling, West Virginia, and was then extended through the Midwest in the hope of reaching the Mississippi River. Although it never reached the Mississippi, the federal government nevertheless spent $6.8 million on the project (Goodrich 1960, 54, 65). The trans-Appalachian section of the National Road competed directly against the Pittsburgh Pike. From the records of two of the five companies that formed the Pittsburgh Pike, we estimate it cost $4,805 per mile to build (Majewski 2000, 47-51; Reiser 1951, 76). The federal government, on the other hand, spent $13,455 per mile to complete the first 200 miles of the National Road (Fishlow 2000, 549). Besides costing much less, the Pittsburgh Pike was far better in quality. The toll gates along the Pittsburgh Pike provided a steady stream of revenue for repairs. The National Road, on the other hand, depended upon intermittent government outlays for basic maintenance, and the road quickly deteriorated. One army engineer in 1832 found “the road in a shocking condition, and every rod of it will require great repair; some of it now is almost impassable” (quoted in Searight, 60). Historians have found that travelers generally preferred to take the Pittsburgh Pike rather than the National Road.

The Plank Road Boom, 1847–1853

By the 1840s the major turnpikes were increasingly eclipsed by the (often state-subsidized) canals and railroads. Many toll roads reverted to free public use and quickly degenerated into miles of dust, mud, and wheel-carved ruts. Well-maintained, short-distance highways were still needed to link to the new and more powerful modes of communication, but because governments had become overextended in poor canal investments, taxpayers were increasingly reluctant to fund internal improvements. Private entrepreneurs found the cost of the technologically most attractive road surfacing material (macadam, a compacted covering of crushed stones) prohibitively expensive at $3,500 per mile. Thus the ongoing need for new feeder roads spurred the search for innovation, and plank roads – toll roads surfaced with wooden planks – seemed to fit the need.

The plank road technique appears to have been introduced into Canada from Russia in 1840. It reached New York a few years later, after the village of Salina, near Syracuse, sent civil engineer George Geddes to Toronto to investigate. After two trips Geddes (whose father, James, was an engineer for the Erie and Champlain Canals and an enthusiastic canal advocate) was convinced of the plank roads’ feasibility and became their great booster. Plank roads, he wrote in Scientific American (Geddes 1850a), could be built at an average cost of $1,500 per mile – although $1,900 would have been more accurate (Majewski, Baer and Klein 1994, 109, fn15). Geddes also published a pamphlet containing an influential, if overly optimistic, estimate that Toronto’s road planks had lasted eight years (Geddes 1850b). Simplicity of design made plank roads even more attractive. Road builders put down two parallel lines of timbers four or five feet apart, which formed the “foundation” of the road. They then laid, at right angles, planks that were about eight feet long and three or four inches thick. Builders used no nails or glue to secure the planks – they were held in place only by their own weight – but they did dig ditches on each side of the road to ensure proper drainage (Klein and Majewski 1994, 42-43).

No less important than plank road economics and technology were the public policy changes that accompanied plank roads. Policymakers, perhaps aware that overly restrictive charters had hamstrung the first turnpike movement, were more permissive in the plank road era. Adjusting for deflation, toll rates were higher, toll gates were separated by shorter distances, and fewer local travelers were exempted from payment of tolls.

Although few today have heard of them, for a short time it seemed that plank roads might be one of the great innovations of the day. In just a few years, more than 1,000 companies built more than 10,000 miles of plank roads nationwide, including more than 3,500 miles in New York (Klein and Majewski 1994, Majewski, Baer, Klein 1993). According to one observer, plank roads, along with canals and railroads, were “the three great inscriptions graven on the earth by the hand of modern science, never to be obliterated, but to grow deeper and deeper” (Bogart 1851).

Except for most of New England, plank roads were chartered throughout the United States, especially in the top lumber-producing states of the Midwest and Mid-Atlantic, as shown in Table 4.

Table 4: Plank Road Incorporation by State

State Number
New York 335
Pennsylvania 315
Ohio 205
Wisconsin 130
Michigan 122
Illinois 88
North Carolina 54
Missouri 49
New Jersey 25
Georgia 16
Iowa 14
Vermont 14
Maryland 13
Connecticut 7
Massachusetts 1
Rhode Island, Maine 0
Total 1388

Notes: The figure for Ohio is through 1851; Pennsylvania, New Jersey, and Maryland are through 1857. Few plank roads were incorporated after 1857. In western states, some roads that were incorporated as ordinary toll roads were built as plank roads, so the 1,388 total should not be taken as a complete national count. For a complete description of the sources for this table, see Majewski, Baer, and Klein 1993, 110.

New York, the leading lumber state, had both the greatest number of plank road charters (350) and the largest value of lumber production ($13,126,000 in 1849 dollars). Plank roads were especially popular in rural dairy counties, where farmers needed quick and dependable transportation to urban markets (Majewski, Baer and Klein 1993).

The plank road and eastern turnpike episodes shared several features. Like the earlier turnpikes, investment in plank road companies came from local landowners, farmers, merchants, and professionals. Stock purchases were motivated less by the prospect of earning dividends than by the convenience and increased trade and development that the roads would bring. To many communities, plank roads held the hope of revitalization and the reversal (or slowing) of relative decline. But those hoping to attain these benefits once again faced a free-rider problem. Investors in plank roads, like the investors in the earlier turnpikes, were often motivated by esteem mechanisms – community allegiance and appreciation, reputational incentives, and their own conscience.

Although plank roads were smooth and sturdy, faring better in rain and snow than did dirt and gravel roads, they lasted only four or five years – not the eight to twelve years that promoters had claimed. Thus, the rush of construction ended suddenly by 1853, and by 1865 most companies had either switched to dirt and gravel surfaces or abandoned their road altogether.

Toll Roads in the Far West, 1850 to 1902

Unlike the areas served by the earlier turnpikes and plank roads, Colorado, Nevada, and California in the 1850s and 1860s lacked the settled communities and social networks that induced participation in community enterprise and improvement. Miners and the merchants who served them knew that the mining boom would not continue indefinitely and therefore seldom planted deep roots. Nor were the large farms that later populated California ripe for civic engagement to anywhere near the degree of the small farms of the East. Society in the early years of the West was not one in which town meetings, door-to-door solicitations, and newspaper campaigns were likely to rally broad support for a road project. The lack of strong communities also meant that there would be few opponents to pressure the government for toll exemptions and otherwise hamper toll road operations. These conditions ensured that toll roads would tend to be more profit-oriented than the eastern turnpikes and plank road companies. Still, it is not clear whether, on the whole, the toll roads of the Far West were profitable.

The California toll road era began in 1850 after passage of general laws of incorporation. New laws passed in 1853 reduced stock subscription requirements from $2,000 per mile to $300 per mile. The 1853 laws also delegated regulatory authority to the county governments. Counties were allowed “to set tolls at rates not to prevent a return of 20 percent,” but they did not interfere with the location of toll roads and usually looked favorably on the toll road companies. After passage of the 1853 laws, the number of toll road incorporations increased dramatically, peaking at nearly 40 new incorporations in 1866 alone. Companies were also created by special acts of the legislature, and some seem to have operated without formal incorporation at all. David and Linda Beito (1998, 75, 84) show that in Nevada many entrepreneurs built and operated toll roads – or other basic infrastructure – before there was a State of Nevada, and some operated for years without any government authority at all.

All told, in the Golden State, approximately 414 toll road companies were initiated,5 resulting in at least 159 companies that successfully built and operated toll roads. Table 5 provides some rough numbers for toll roads in western states. The numbers presented there are minimums. For California and Nevada, the numbers probably only slightly underestimate the true totals; for the other states the figures are quite sketchy and might significantly underestimate true totals. Again, an abundance of testimony indicates that the private road companies were the serious road builders, in terms of quantity and quality (see the ten quotations at Klein and Yin 1996, 689-90).

Table 5: Rough Minimums on Toll Roads in the West

State   Toll Road Incorporations   Toll Roads Actually Built
California 414 159
Colorado 350 n.a.
Nevada n.a. 117
Texas 50 n.a.
Wyoming 11 n.a.
Oregon 10 n.a.

Sources: For California, Klein and Yin 1996: 681-82; for Nevada, Beito and Beito 1998: 74; for the other states, notes and correspondence in D. Klein’s files.

Table 6 attempts to justify guesses about the total number of toll road companies and total toll road miles. The first three numbers in the “Incorporations” column come from Tables 2, 4, and 5. The estimates of success rates and average road length (in the third and fourth columns) are extrapolations from components that have been studied with more care. We have made these estimates conservative, in the sense of avoiding any overstatement of the extent of private road building. The ~ symbol is used to keep the reader mindful of the fact that many of these numbers are estimates. The numbers in the right-hand column have been rounded to the nearest thousand, so as to avoid any impression of false precision. The “Other” row suggests a minimum to cover all the regions, periods, and road types not covered in Tables 2, 4, and 5. For example, the “Other” row would cover turnpikes in the East, South, and Midwest after 1845 (Virginia’s turnpike boom came in the late 1840s and 1850s), and all turnpikes and plank roads in Indiana, whose county-based incorporation, it seems, has never been systematically researched. Ideally, not only would the numbers be more definite and complete, but there would also be a weighting by years of operation. The “30,000 – 52,000 miles” should be read as a range for the sum of all the miles operated by any company at any time during the 100-plus-year period.

Table 6: A Rough Tally of the Private Toll Roads

Toll Road Movement   Incorporations   % Successful in Building Road   Roads Built and Operated   Average Road Length   Toll Road Miles Operated
Turnpikes incorporated from 1792 to 1845   1562   ~ 55 %   ~ 859   ~ 18   ~ 15,000
Plank roads incorporated from 1845 to roughly 1860   1388   ~ 65 %   ~ 902   ~ 10   ~ 9,000
Toll roads in the West incorporated from 1850 to roughly 1902   ~ 1127   ~ 40 %   ~ 450   ~ 15   ~ 7,000
Other (a rough guess)   ~ 1000   ~ 50 %   ~ 500   ~ 16   ~ 8,000
Ranges for totals   5,000 – 5,600 incorporations   48 – 60 percent   2,500 – 3,200 roads   12 – 16 miles   30,000 – 52,000 miles

Sources: Those of Tables 2, 4, and 5, plus the research files of the authors.
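
The tally in Table 6 is essentially incorporations multiplied by a success rate and an average road length. The sketch below reproduces that arithmetic using the table’s own figures (the success rates and lengths are the authors’ rough estimates, so the output inherits their imprecision); the resulting totals fall inside the ranges reported in the bottom row.

# (incorporations, estimated share successful, estimated average length in miles), from Table 6.
movements = {
    "Turnpikes, 1792-1845":              (1562, 0.55, 18),
    "Plank roads, 1845 to ~1860":        (1388, 0.65, 10),
    "Western toll roads, 1850 to ~1902": (1127, 0.40, 15),
    "Other (rough guess)":               (1000, 0.50, 16),
}

total_roads = 0.0
total_miles = 0.0
for name, (incorporations, success_rate, avg_length) in movements.items():
    roads = incorporations * success_rate
    miles = roads * avg_length
    total_roads += roads
    total_miles += miles
    print(f"{name}: ~{roads:,.0f} roads built, ~{miles:,.0f} miles operated")
print(f"Totals: ~{total_roads:,.0f} roads, ~{total_miles:,.0f} miles")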

The End of Toll Roads in the Progressive Period

In 1880 many toll road companies nationwide continued to operate – probably in the range of 400 to 600 companies.6 But by 1920 the private toll road was almost entirely stamped out. From Maine to California, the laws and political attitudes from around 1880 onward moved against the handling of social affairs in ways that seemed informal, inexpert and unsystematic. Progressivism represented a burgeoning of more collectivist ideologies and policy reforms. Many progressive intellectuals took inspiration from European socialist doctrines. Although the politics of restraining corporate evils had a democratic and populist aspect, the bureaucratic spirit was highly managerial and hierarchical, intending to replicate the efficiency of large corporations in the new professional and scientific administration of government (Higgs 1987, 113-116, Ekirch 1967, 171-94).

One might point to the rise of the bicycle and later the automobile, which needed a harder and smoother surface, to explain the growth of America’s road network in the Progressive period. But such demand-side changes do not speak to the issues of road ownership and tolling. Automobiles achieved higher speeds, which made stopping to pay a toll more inconvenient, and that may have reinforced the anti-toll-road movement already underway before the automobile. Such developments figured into the history of road policy, but they did not really provide a good reason for the policy movement against the toll roads. The following words of a county board of supervisors in New York in 1906 indicate a more general ideological bent against toll road companies:

[T]he ownership and operation of this road by a private corporation is contrary to public sentiment in this county, and [the] cause of good roads, which has received so much attention in this state in recent years, requires that this antiquated system should be abolished. … That public opinion throughout the state is strongly in favor of the abolition of toll roads is indicated by the fact that since the passage of the act of 1899, which permits counties to acquire these roads, the boards of supervisors of most of the counties where such roads have existed have availed themselves of its provisions and practically abolished the toll road.

Given such attitudes, it was no wonder that the new Office of Road Inquiry, established within the U.S. Department of Agriculture, began in 1893 to gather information, conduct research, and “educate” for better roads. The new bureaucracy opposed toll roads, and the Federal Highway Act of 1916 barred the use of tolls on highways receiving federal money (Seely 1987, 15, 79). Anti-toll-road sentiment became state and national policy.

Conclusions and Implications

Throughout the nineteenth century, the United States was notoriously “land-rich” and “capital-poor.” The viability of turnpikes shows how Americans devised institutions – in this case, toll-collecting corporations – that allowed them to invest precious capital in important public projects. What’s more, turnpikes paid little in direct dividends and stock appreciation, yet they still attracted investment. Investors, of course, cared about long-term economic development, but that alone does not account for how turnpike organizers overcame the important public goods problem of buying turnpike stock. Esteem, social pressure, and other non-economic motivations influenced local residents to make investments that they knew would be unprofitable (at least in a direct sense) but would nevertheless help the entire community. On the other hand, the turnpike companies enjoyed the organizational clarity of stock ownership and residual returns. All companies faced the possibility of pressure from investors, who might have wanted to salvage something of their investment. Residual claimancy may have enhanced the viability of many projects, including communitarian projects undertaken primarily for use and esteem.

The combining of these two ingredients – the appeal of use and esteem, and the incentives and proprietary clarity of residual returns – is today severely undermined by the modern legal bifurcation of private initiative into “not-for-profit” and “for-profit” concerns. Not-for-profit corporations can appeal to use and esteem but cannot organize themselves to earn residual returns. For-profit corporations organize themselves for residual returns but cannot very well appeal to use and esteem. As already noted, prior to modern tax law and regulation, the old American toll roads were, relative to the British turnpike trusts, more, not less, use-and-esteem oriented by virtue of being structured to pay dividends rather than interest. Like the eighteenth-century British turnpike trusts, the twentieth-century American governmental toll projects financed (in part) by privately purchased bonds generally failed, relative to the nineteenth-century American company model, to draw on use and esteem motivations.

The turnpike experience of nineteenth-century America suggests that the stock/dividend company can also be a fruitful, efficient, and socially beneficial way to make losses and go on making losses. The success of turnpikes suggests that our modern sensibility of dividing enterprises between profit and non-profit – a distinction embedded in modern tax laws and regulations – unnecessarily impoverishes the imagination of economists and other policy makers. Without such strict legal and institutional bifurcation, our own modern society might better recognize the esteem in trade and the trade in esteem.

References

Baer, Christopher T., Daniel B. Klein, and John Majewski. “From Trunk to Branch: Toll Roads in New York, 1800-1860.” Essays in Economic and Business History XI (1993): 191-209.

Beito, David T., and Linda Royster Beito. “Rival Road Builders: Private Toll Roads in Nevada, 1852-1880.” Nevada Historical Society Quarterly 41 (1998): 71-91.

Benson, Bruce. “Are Public Goods Really Common Pools? Consideration of the Evolution of Policing and Highways in England.” Economic Inquiry 32 no. 2 (1994).

Bogart, W. H. “First Plank Road.” Hunt’s Merchant Magazine (1851).

Brown, Richard D. “The Emergence of Voluntary Associations in Massachusetts, 1760-1830.” Journal of Voluntary Action Research (1973): 64-73.

Bodenhorn, Howard. A History of Banking in Antebellum America. New York: Cambridge University Press, 2000.

Cage, R. A. “The Lowden Empire: A Case Study of Wagon Roads in Northern California.” The Pacific Historian 28 (1984): 33-48.

Davis, Joseph S. Essays in the Earlier History of American Corporations. Cambridge: Harvard University Press, 1917.

DuBasky, Mayo. The Gist of Mencken: Quotations from America’s Critic. Metuchen, NJ: Scarecrow Press, 1990.

Durrenberger, J.A. Turnpikes: A Study of the Toll Road Movement in the Middle Atlantic States and Maryland. Valdosta, GA: Southern Stationery and Printing, 1931.

Ekirch, Arthur A., Jr. The Decline of American Liberalism. New York: Atheneum, 1967.

Fishlow, Albert. “Internal Transportation in the Nineteenth and Early Twentieth Centuries.” In The Cambridge Economic History of the United States, Vol. II: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman. New York: Cambridge University Press, 2000.

Geddes, George. Scientific American 5 (April 27, 1850).

Geddes, George. Observations upon Plank Roads. Syracuse: L.W. Hall, 1850.

Goodrich, Carter. “Public Spirit and American Improvements.” Proceedings of the American Philosophical Society, 92 (1948): 305-09.

Goodrich, Carter. Government Promotion of American Canals and Railroads, 1800-1890. New York: Columbia University Press, 1960.

Gunderson, Gerald. “Privatization and the Nineteenth-Century Turnpike.” Cato Journal 9 no. 1 (1989): 191-200.

Higgs, Robert. Crises and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Higgs, Robert. “Regime Uncertainty: Why the Great Depression Lasted So Long and Why Prosperity Resumed after the War.” Independent Review 1 no. 4 (1997): 561-600.

Kaplan, Michael D. “The Toll Road Building Career of Otto Mears, 1881-1887.” Colorado Magazine 52 (1975): 153-70.

Kirkland, Edward C. Men, Cities and Transportation: A Study in New England History, 1820-1900. Cambridge, MA.: Harvard University Press, 1948.

Klein, Daniel. “The Voluntary Provision of Public Goods? The Turnpike Companies of Early America.” Economic Inquiry (1990): 788-812. (Reprinted in The Voluntary City, edited by David Beito, Peter Gordon and Alexander Tabarrok. Ann Arbor: University of Michigan Press, 2002.)

Klein, Daniel B. and Gordon J. Fielding. “Private Toll Roads: Learning from the Nineteenth Century.” Transportation Quarterly 46, no. 3 (1992): 321-41.

Klein, Daniel B. and John Majewski. “Economy, Community and Law: The Turnpike Movement in New York, 1797-1845.” Law & Society Review 26, no. 3 (1992): 469-512.

Klein, Daniel B. and John Majewski. “Plank Road Fever in Antebellum America: New York State Origins.” New York History (1994): 39-65.

Klein, Daniel B. and Chi Yin. “Use, Esteem, and Profit in Voluntary Provision: Toll Roads in California, 1850-1902.” Economic Inquiry (1996): 678-92.

Kresge, David T. and Paul O. Roberts. Techniques of Transport Planning, Volume Two: Systems Analysis and Simulation Models. Washington DC: Brookings Institution, 1971.

Lane, Wheaton J. From Indian Trail to Iron Horse: Travel and Transportation in New Jersey, 1620-1860. Princeton: Princeton University Press, 1939.

Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War. New York: Cambridge University Press, 2000.

Majewski, John. “The Booster Spirit and ‘Mid-Atlantic’ Distinctiveness: Shareholding in Pennsylvania Banking and Transportation Corporations, 1800 to 1840.” Manuscript, Department of History, UC Santa Barbara, 2001.

Majewski, John, Christopher Baer and Daniel B. Klein. “Responding to Relative Decline: The Plank Road Boom of Antebellum New York.” Journal of Economic History 53, no. 1 (1993): 106-122.

Nash, Christopher A. “Integration of Public Transport: An Economic Assessment.” In Bus Deregulation and Privatisation: An International Perspective, edited by J.S. Dodgson and N. Topham. Brookfield, VT: Avebury, 1988.

Nash, Gerald D. State Government and Economic Development: A History of Administrative Policies in California, 1849-1933. Berkeley: University of California Press (Institute of Governmental Studies), 1964.

Pawson, Eric. Transport and Economy: The Turnpike Roads of Eighteenth Century Britain. London: Academic Press, 1977.

Peyton, Billy Joe. “Survey and Building the [National] Road.” In The National Road, edited by Karl Raitz. Baltimore: Johns Hopkins University Press, 1996.

Poole, Robert W. “Private Toll Roads.” In Privatizing Transportation Systems, edited by Simon Hakim, Paul Seidenstate, and Gary W. Bowman. Westport, CT: Praeger, 1996.

Reiser, Catherine Elizabeth. Pittsburgh’s Commercial Development, 1800-1850. Harrisburg: Pennsylvania Historical and Museum Commission, 1951.

Ridgway, Arthur. “The Mission of Colorado Toll Roads.” Colorado Magazine 9 (1932): 161-169.

Roth, Gabriel. Roads in a Market Economy. Aldershot, England: Avebury Technical, 1996.

Searight, Thomas B. The Old Pike: A History of the National Road. Uniontown, PA: Thomas Searight, 1894.

Seely, Bruce E. Building the American Highway System: Engineers as Policy Makers. Philadelphia: Temple University Press, 1987.

Taylor, George R. The Transportation Revolution, 1815-1860. New York: Rinehart, 1951.

Thwaites, Reuben Gold. Early Western Travels, 1746-1846. Cleveland: A. H. Clark, 1907.

U. S. Agency for International Development. “A History of Foreign Assistance.” On the U.S. A.I.D. Website. Posted April 3, 2002. Accessed January 20, 2003.

Wood, Frederick J. The Turnpikes of New England and Evolution of the Same through England, Virginia, and Maryland. Boston: Marshall Jones, 1919.

1 Daniel Klein, Department of Economics, Santa Clara University, Santa Clara, CA, 95053, and Ratio Institute, Stockholm, Sweden; Email: Dklein@scu.edu.

John Majewski, Department of History, University of California, Santa Barbara, 93106; Email: Majewski@history.ucsb.edu.

2 The term “turnpike” comes from Britain, referring to a long staff (or pike) that acted as a swinging barrier or tollgate. In nineteenth-century America, “turnpike” specifically meant a toll road with a surface of gravel and earth, as opposed to a “plank road,” which was a toll road surfaced with wooden planks. Later in the century, all such roads were typically called simply “toll roads.”

3 For a discussion of returns and expectations, see Klein 1990: 791-95.

4 See Klein 1990: 803-808, Klein and Majewski 1994: 56-61.

5 The 414 figure consists of 222 companies organized under the general law, 102 chartered by the legislature, and 90 companies that we learned of through county records, local histories, and various other sources.

6 Durrenberger (1931: 164) notes that in 1911 there were 108 turnpikes operating in Pennsylvania alone.

Citation: Klein, Daniel and John Majewski. “Turnpikes and Toll Roads in Nineteenth-Century America”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/turnpikes-and-toll-roads-in-nineteenth-century-america/

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University

Overview

The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.
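As a quick arithmetic check, the short sketch below (plain Python, using only the index levels quoted above) reproduces the 21 percent figure; the last line simply notes what a full 90 percent loss from the September high would imply for the index level.

```python
# Dow Jones Industrial Average levels cited above
high_sept_3 = 381.2    # record high, September 3, 1929
close_oct_24 = 299.5   # close on Black Thursday, October 24, 1929

decline = (high_sept_3 - close_oct_24) / high_sept_3
print(f"Decline from the September high: {decline:.0%}")   # about 21%

# A full 90 percent loss from the September high would put the index near 38
print(f"Level implied by a 90 percent loss: {high_sept_3 * 0.10:.0f}")
```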

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and the amount of margin buying, all of which fueled the purchase of public utility stocks, and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event. This sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24, John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And the Economist, when stock prices reached their low for the year, repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent in four years, a compound annual growth of 21.8%. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous and the stock market with its rising prices reflected this prosperity as well as the expectation that the prosperity would continue.
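A brief check of the compounding, using only the figures in the preceding paragraph (plain Python):

```python
# A 120 percent total rise over four years, converted to a compound annual rate
total_rise = 1.20   # stocks up 120% from 1925 to the third quarter of 1929
years = 4

annual_rate = (1 + total_rise) ** (1 / years) - 1
print(f"Compound annual growth: {annual_rate:.1%}")   # about 21.8%, as stated above
```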

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading up to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Also, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way to explain Fisher’s optimism regarding the level of stock prices. What Fisher saw was manufacturing efficiency (output per worker) increasing rapidly, as were manufacturing output and the use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S & P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.
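The implied price-earnings ratio of a bond is simply the reciprocal of its yield; the sketch below (plain Python) applies that relationship to the two 1929 yields quoted above, with the second line offered only for comparison.

```python
# A bond's implied "price-earnings ratio" is the reciprocal of its yield
industrial_yield = 0.051   # investment-grade industrial bonds, 1929
government_yield = 0.034   # government bonds, 1929

print(f"Implied P/E of industrial bonds: {1 / industrial_yield:.1f}")   # about 19.6
print(f"Implied P/E of government bonds: {1 / government_yield:.1f}")   # about 29.4
```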

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87.1 The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During the period commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which beat all previous months and years except for September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121 — also an all-time record.2 In October the loadings dropped to 118, which was a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all the periods studied were prior to 1961) with an annual productivity change measure of 3.7%. This was outstanding economic performance — performance which normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928, the number was only 955 and in 1927, it was 755. In September 1929 dividend increases were announced by 193 firms compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated that the earnings statements of surveyed firms for the first quarter showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929 compared to 1928 was 24.4%. In September, the results were expanded to 916 firms with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than in 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1
Dow-Jones Industrials Index Average
of Lows and Highs for the Year
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 measures are from Wigmore, 1985, pp. 637-639.

Using the information in Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value over the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices measured at different times would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.
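The growth and loss percentages follow directly from the Table 1 averages; a short check (plain Python):

```python
# Dow Jones Industrials averages from Table 1
dow_1922, dow_1929, dow_1932 = 91.0, 290.0, 79.4

rise = dow_1929 / dow_1922 - 1
annual = (dow_1929 / dow_1922) ** (1 / 7) - 1   # seven years, 1922-1929
loss = 1 - dow_1932 / dow_1929

print(f"1922-1929 rise: {rise:.1%}, or {annual:.0%} per year")   # about 218.7% and 18%
print(f"1929-1932 loss: {loss:.0%}")                             # about 73%
```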

If we take the 386 high of September 1929 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, if we held stock in September 1929, would not have sold early in October. In fact, if I had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but there will be one major exception. In September 1929, the market value of one segment of the market, the public utility sector, should have been based on existing fundamentals, and the fundamentals of that sector seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post with a page 1 headline exclaimed “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer, that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported the remark by Snowden that there was “a perfect orgy of speculation.” Also, on October 4, the New York Times made another editorial reference to Snowden’s American speculation orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation, and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business conditions
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24), “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification and thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than with market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headlines (October 24, p.1) said “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” In a total market value of $87 billion the market declined $4 billion — a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 become merely a “Dip.”

The Times lamented on October 24 (p. 38), “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington. The rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article, “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate-making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October, or 18% of the value at the beginning of the month. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the climate of concern about speculation, which may have made it possible for relatively specific issues to trigger a general market decline. Second, there are the investment trusts, utility holding companies, and margin buying that seem to have resulted in one sector being very over-levered and overvalued. Third, there are the public utility stocks that appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 states that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel, described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above), and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazines and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation,” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm was bankrupted, because the brokers managed their finances in a conservative manner. At the end of October, margins were lowered to 25%.

Brokers’ loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 broker loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further, to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, “Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals.” The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years seasoned closed-end mutual funds have sold at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund’s components (securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e., higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this “provides clinching evidence of wide-scale stock-market irrationality during the 1920s.” However, Malkiel also notes (p. 442) that “as of the mid-1990’s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned.” Warren Buffett is the guiding force behind Berkshire Hathaway’s great success as an investor. If we were to conclude that rational investors would currently pay a premium for Warren Buffett’s expertise, then we should reject a conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.

There were $1 billion of investment trusts sold to investors in the first eight months of 1929, compared to $400 million in all of 1928. The Economist reported that this was important (October 12, 1929, p. 665): “Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing.” In September alone $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from the Economist and the Financial Times) are not exactly comparable, both sets of numbers indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolio of a couple of dozen of the largest investment trusts as of September-October 1929 this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal information to establish whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For the normal financial security, this is impossible since the intrinsic values are not objectively well defined. There are two exceptions. DeLong and Schleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market and the market values of the securities in the funds’ portfolios are a very reasonable estimate of the intrinsic value. DeLong and Schleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682), “portfolios were rarely published and net asset values rarely calculated.” It was only after the crash that investment trusts started to routinely reveal their net asset value. In the third quarter of 1929 (p. 682), “three types of event seemed to trigger a closed-end fund’s publication of its portfolio.” The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) start-up of a new closed-end fund (this stock price reflects selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, so the inclusion of any that were introduces a bias). After 1929, some trusts revealed 1929 net asset values. Thus, DeLong and Schleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked the information regarding the portfolio composition, we would have to place investment trusts in a unique investment category where investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Schleifer found the premiums of price over value to be large — the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because funds were not selected randomly.

DeLong and Schleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premiums are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than the person not plugged into the Street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.

The Public Utility Sector

In addition to investment trusts, intrinsic values are usually well defined for regulated public utilities. The general rule applied by regulatory authorities is to allow utilities to earn a “fair return” on an allowed rate base. The fair return is defined to be equal to a utility’s weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed rate equity base of $X and is allowed to earn a return of r (rX in terms of dollars), then after one year the firm’s equity will be worth X + rX, or (1 + r)X, with a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility’s stock is approximately equal to the rate base.
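A numeric illustration may help. The figures below are hypothetical (a $100 rate base and a 7% allowed return are assumed purely for illustration, not taken from the text); the sketch simply restates the algebra of the preceding paragraph in Python.

```python
# Hypothetical figures: under fair-return regulation, the present value of the
# equity equals the rate base, so the stock price should track book value.
rate_base = 100.0       # allowed equity rate base per share, $X (hypothetical)
allowed_return = 0.07   # allowed return r, assumed equal to the market's required return

value_in_one_year = rate_base * (1 + allowed_return)       # X + rX = (1 + r)X
present_value = value_in_one_year / (1 + allowed_return)   # discounted at the same r

print(f"Present value of the equity: ${present_value:.2f}")   # equals the $100 rate base
```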

There can be time periods when the utility earns more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may differ from the allowed return because of inaccurate estimates or changing capital market conditions. Thus, the stock price may differ from the book value, but one would not expect the stock price to be very much different from the book value per share for very long. There should be a tendency for the stock price to revert to the book value for a public utility supplying an essential service where there is no effective competition and the rate commission is effectively allowing a fair return to be earned.

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Operating Utility    1929 Price-Earnings Ratio (at High Price for Year)    Market Price/Book Value
Commonwealth Edison    35    3.31
Consolidated Gas of New York    39    3.34
Detroit Edison    35    3.06
Pacific Gas & Electric    28    3.30
Public Service of New Jersey    35    3.14

Sooner or later this price bubble had to break unless the regulatory authorities were to decide to allow the utilities to earn more than a fair return, or unless an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929, applicable to the Edison Electric Illuminating Company of Boston, made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32%, and utilities by 55% — thus utilities dropped the furthest from their highs. A comparison of the beginning-of-year prices with the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (New York Times, 1 January 1930 quotations.)

1929
Firm    High Price    Low Price    Low Price Divided by High Price
American Power & Light    175 3/8    64 1/4    .37
American Superpower    71 1/8    15    .21
Brooklyn Gas    248 1/2    99    .44
Buffalo, Niagara & Eastern Power    128    61 1/8    .48
Cities Service    68 1/8    20    .29
Consolidated Gas Co. of N.Y.    183 1/4    80 1/8    .44
Electric Bond and Share    189    50    .26
Long Island Lighting    91    40    .44
Niagara Hudson Power    30 3/4    11 1/4    .37
Transamerica    67 3/8    20 1/4    .30

Picking on one segment of the market as the cause of a general break in the market is not obviously correct. But the combination of an overpriced utility segment and investment trusts, with a portion of the market having purchased on margin, appears to be a viable explanation. In addition, as of September 1, 1929 the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, the utilities were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1), in explaining the October 16 market declines (an Associated Press release), reported, “Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision.”

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The heading said “Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity.” Boston Edison lost 15 points for the day even though the decision was released after the Friday closing. The high for the year was $440, and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison; on October 16, the Department announced it would investigate the level of rates being charged by Edison; and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), there was a discussion in the Times of the Massachusetts decision in the column “Topic in Wall Street.” It “excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States.”

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14), but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor that can be identified as leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was that it refused to permit the Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that the primary effect of a stock split would be to reduce the stock price by 50% while leaving the total value unchanged; thus the denial of the split was not economically significant, and the stock split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to “reduce the selling price to the consumer.” Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) “The worst pressure was against Public Utility shares” and the headline read “Electric Issue Hard Hit.”

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York Governor (Franklin D. Roosevelt) appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, “this inquiry is likely to have far-reaching effects and may lead to similar action in other States.” Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a “vicious system” (October 19, p. 21), which ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee, wanted more control over utility holding companies, especially management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting accusations of graft with statements that his administration would fight aggressively against rate increases, thus proving that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against “speculative and uninformed buying.” The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a decline of nearly 34 points, or about 11%) before recovering to close at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and, even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, three potentially important multipliers meant that any change in a public utility’s underlying value would produce a magnified change in market values and in investors’ wealth.

Consider the following hypothetical values for a public utility:

Book value per share of the utility: $50

Market price per share of the utility: $162.50 (see note 2)

Market price of an investment trust holding the stock (assuming a 100% premium over market value): $325.00

If the utility’s $112.50 market price premium over book value were eliminated, its stock would fall to $50; with no premium, the investment trust’s stock would also be worth only $50. The combined loss in market value would be $387.50: the $112.50 loss in the underlying utility stock plus the $275 reduction in the investment trust’s stock. (The $387.50 figure assumes investments in both the utility’s stock and the investment trust.) The public utility holding companies were in fact even more vulnerable to such a repricing, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43).

For simplicity, this discussion has assumed the trust held all the holding company stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, this discussion has also assumed that no debt or margin was used to finance the investment. Assume instead that an individual investor put up only $162.50 and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock fell from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100, and the investor would have lost the entire $162.50 investment, since the stock would then be worth less than the $162.50 owed. The vulnerability of the margin investor buying a trust stock that has invested in a utility is obvious.
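The arithmetic of these two cases can be traced in a few lines. The sketch below is a minimal illustration using only the hypothetical figures already given above (the $50 book value, the 3.25 price-to-book ratio, the 100% trust premium, and the 50% margin loan); none of the numbers are actual market data.

```python
# Hypothetical figures from the example above: a utility trading at 3.25 times book
# value, held through an investment trust priced at a 100% premium, bought on margin.
book_value = 50.00
utility_price = 3.25 * book_value        # $162.50 per share
trust_price = 2.0 * utility_price        # $325.00: a 100% premium over the utility share

# Suppose the market stops paying any premium over book value, at either level.
utility_after = book_value               # $50.00
trust_after = utility_after              # $50.00, no premium

utility_loss = utility_price - utility_after   # $112.50
trust_loss = trust_price - trust_after         # $275.00
print(utility_loss + trust_loss)               # 387.5 combined loss for a holder of both

# Margin buyer of the trust: $162.50 of equity plus a $162.50 loan.
loan = 162.50
# Even if the trust keeps its 100% premium, the utility's fall to book value leaves
# the trust at $100, which is less than the loan: the buyer's equity is wiped out.
trust_with_premium = 2.0 * utility_after       # $100.00
print(trust_with_premium - loan)               # -62.5: nothing left of the $162.50 stake
```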

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies’ stock, and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies. For these companies, the ratio of the 1929 low price to the 1929 high price averaged 33%. These stocks were even more volatile than the publicly owned utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that the asset is financed with 40% stock ($400,000). A utility holding company owns the utility’s stock and is itself financed with 40% stock ($160,000). A second utility holding company owns the first and is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock on 50% margin, investing $12,800. Thus, the $1,000,000 utility asset is ultimately supported by only $12,800 of the investor’s equity capital.

When this leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust, the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset is again financed with $600,000 of debt and $400,000 of common stock, but the common stock now has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common stock. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is thus supporting $1,761,600 of debt, and the investor’s $38,400 of equity is very much in jeopardy.
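A short calculation, again using only the hypothetical 40%-equity and 50%-margin assumptions stated in the two paragraphs above, reproduces both results: the $12,800 of top-level equity supporting the asset at book values, and the $1,761,600 of total debt once the stock layers are priced at inflated market values.

```python
# Layered financing of a $1,000,000 utility asset, each layer 40% equity / 60% debt,
# with the final investor buying the trust's stock on 50% margin. Hypothetical only.
asset = 1_000_000

# (1) At book value: each layer's equity is 40% of the layer below it.
equity = asset
for layer in ["utility", "holding co. 1", "holding co. 2", "investment trust"]:
    equity *= 0.40
investor_equity = equity * 0.50
print(investor_equity)        # 12,800 of equity ultimately supports the $1,000,000 asset

# (2) With inflated prices: the utility's $400,000 of stock trades at $1,200,000,
# and each layer above is again financed 60% debt / 40% stock at market value.
total_debt = 600_000          # debt at the operating utility
stock_value = 1_200_000       # market value of the utility's common stock
for layer in ["holding co. 1", "holding co. 2", "investment trust"]:
    total_debt += 0.60 * stock_value
    stock_value *= 0.40
total_debt += 0.50 * stock_value   # investor's margin loan against the trust stock
print(total_debt)                  # 1,761,600 of debt resting on the $1,000,000 asset
print(0.50 * stock_value)          # 38,400 of investor equity at the top of the pyramid
```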

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that fear of speculation helped push the stock market to the brink of collapse. It is possible that Hoover’s aggressive campaign against speculation, combined with the blow dealt to overpriced public utility stocks by the Massachusetts Public Utility Commission’s decision and statements and with the vulnerability of margin investors, triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden’s reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities would not be able to justify their market prices in the future, the decline in utility stock prices further weakened margin positions and led to general selling. At some stage, the selling became a panic and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, some investors lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of presumably undramatic events may establish a setting for a wide price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Commercial and Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.

References

Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. “The Reasons Stock Crashed in 1929.” Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. “Bad Market Days,” World Economics (2001) 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. “The Equity Premium: Stock and Bond Returns since 1802.” Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

Note 1: 1923-25 average = 100.

Note 2: Based on a price to book value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-1929-stock-market-crash/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

                  1750                1790                                1810                                 1860
State             White     Black     White      Free Nonwhite  Slave     White      Free Nonwhite  Slave      White       Free Nonwhite  Slave

Connecticut       108,270   3,010     232,236    2,771          2,648     255,179    6,453          310        451,504     8,643          -
Delaware          27,208    1,496     46,310     3,899          8,887     55,361     13,136         4,177      90,589      19,829         1,798
Georgia           4,200     1,000     52,886     398            29,264    145,414    1,801          105,218    591,550     3,538          462,198
Maryland          97,623    43,450    208,649    8,043          103,036   235,117    33,927         111,502    515,918     83,942         87,189
Massachusetts     183,925   4,075     373,187    5,369          -         465,303    6,737          -          1,221,432   9,634          -
New Hampshire     26,955    550       141,112    630            157       182,690    970            -          325,579     494            -
New Jersey        66,039    5,354     169,954    2,762          11,423    226,868    7,843          10,851     646,699     25,318         -
New York          65,682    11,014    314,366    4,682          21,193    918,699    25,333         15,017     3,831,590   49,145         -
North Carolina    53,184    19,800    289,181    5,041          100,783   376,410    10,266         168,824    629,942     31,621         331,059
Pennsylvania      116,794   2,872     317,479    6,531          3,707     786,804    22,492         795        2,849,259   56,956         -
Rhode Island      29,879    3,347     64,670     3,484          958       73,214     3,609          108        170,649     3,971          -
South Carolina    25,000    39,000    140,178    1,801          107,094   214,196    4,554          196,365    291,300     10,002         402,406
Virginia          129,581   101,452   442,117    12,866         292,627   551,534    30,570         392,518    1,047,299   58,154         490,865
United States     934,340   236,420   2,792,325  58,277         681,777   4,486,789  167,691        1,005,685  12,663,310  361,247        1,775,515

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

                  1750          1790          1810          1860
State             Black/total   Slave/total   Slave/total   Slave/total
                  population    population    population    population

Alabama           -             -             -             45.12
Arkansas          -             -             -             25.52
Delaware          5.21          15.04         5.75          1.60
Florida           -             -             -             43.97
Georgia           19.23         35.45         41.68         43.72
Kentucky          -             16.87         19.82         19.51
Louisiana         -             -             -             46.85
Maryland          30.80         32.23         29.30         12.69
Mississippi       -             -             -             55.18
Missouri          -             -             -             9.72
North Carolina    27.13         25.51         30.39         33.35
South Carolina    60.94         43.00         47.30         57.18
Tennessee         -             -             17.02         24.84
Texas             -             -             -             30.22
Virginia          43.91         39.14         40.27         30.75
Overall           37.97         33.95         33.25         32.27

A dash indicates that the state was not yet enumerated separately in that census year.

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State   Total           Held 1   Held 2   Held 3   Held 4   Held 5   Held 1-5   Held 100-499   Held 500+
        slaveholders    slave    slaves   slaves   slaves   slaves   slaves     slaves         slaves
AL 33,730 5,607 3,663 2,805 2,329 1,986 16,390 344 -
AR 11,481 2,339 1,503 1,070 894 730 6,536 65 1
DE 587 237 114 74 51 34 510 - -
FL 5,152 863 568 437 365 285 2,518 47 -
GA 41,084 6,713 4,335 3,482 2,984 2,543 20,057 211 8
KY 38,645 9,306 5,430 4,009 3,281 2,694 24,720 7 -
LA 22,033 4,092 2,573 2,034 1,536 1,310 11,545 543 4
MD 13,783 4,119 1,952 1,279 1,023 815 9,188 16 -
MS 30,943 4,856 3,201 2,503 2,129 1,809 14,498 315 1
MO 24,320 6,893 3,754 2,773 2,243 1,686 17,349 4 -
NC 34,658 6,440 4,017 3,068 2,546 2,245 18,316 133 -
SC 26,701 3,763 2,533 1,990 1,731 1,541 11,558 441 8
TN 36,844 7,820 4,738 3,609 3,012 2,536 21,715 47 -
TX 21,878 4,593 2,874 2,093 1,782 1,439 12,781 54 -
VA 52,128 11,085 5,989 4,474 3,807 3,233 28,588 114 -
TOTAL 393,967 78,726 47,244 35,700 29,713 24,886 216,269 2,341 22

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? Slaves enjoyed an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.
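As a rough check on the size of this natural increase, the implied average annual growth rate can be computed directly from the Table 2 totals; the sketch below is a minimal illustration using only those published figures.

```python
# Implied compound annual growth of the Southern slave population, from Table 2 above.
pop_1810, pop_1860 = 1_103_700, 3_950_511
years = 1860 - 1810
growth_rate = (pop_1860 / pop_1810) ** (1 / years) - 1
print(f"{growth_rate:.2%}")   # roughly 2.6% per year, sustained for half a century
```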

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure omitted: slave prices by age. Source: Fogel and Engerman (1974).]

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1381 in 1861 and for $1116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.


[Figure omitted: slave prices during the Civil War. Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known instance shows that contemporaneous free labor thought that urban slavery may even have worked too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
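To make the structure of this kind of rate-of-return calculation concrete, the following is a minimal sketch in which the purchase price, annual net earnings, and working life are purely hypothetical placeholder values (they are not figures from Conrad and Meyer or from Fogel and Engerman): the rate of return is the discount rate that equates the purchase price with the present value of the expected stream of net earnings.

```python
# Hypothetical, illustrative inputs; not data from Conrad and Meyer's study.
purchase_price = 1_000        # price of a prime field hand, dollars
annual_net_earnings = 110     # value of output minus maintenance and supervision, per year
working_years = 25            # expected remaining productive lifespan

def npv(rate):
    """Present value of the earnings stream minus the purchase price."""
    pv = sum(annual_net_earnings / (1 + rate) ** t for t in range(1, working_years + 1))
    return pv - purchase_price

# Solve for the internal rate of return by bisection: npv() falls as the rate rises.
lo, hi = 0.0001, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"{mid:.2%}")   # for these inputs, a return in the neighborhood of 10%
```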

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm otherwise identical to a free farm (in terms of the amount of land, livestock, machinery, and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law: 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/