
The Economic History of Mexico

Richard Salvucci, Trinity University



This article is a brief interpretive survey of some of the major features of the economic history of Mexico from pre-conquest to the present. I begin with the pre-capitalist economy of Mesoamerica. The colonial period is divided into the Habsburg and Bourbon regimes, although the focus is not really political: the emphasis is instead on the consequences of the demographic and fiscal changes that colonialism brought. Next I analyze the economic impact of independence and its accompanying conflict. A tentative effort to reconstruct secular patterns of growth in the nineteenth century follows, as well as an account of the effects of foreign intervention, war, and the so-called “dictatorship” of Porfirio Díaz. I then examine the economic consequences of the Mexican Revolution down through the presidency of Lázaro Cárdenas, before considering the effects of the Great Depression and World War II. This is followed by an examination of the so-called Mexican Miracle, the period of import-substitution industrialization after World War II. The end of the “miracle” and the rise of economic instability in the 1970s and 1980s are discussed in some detail. I conclude with the structural reforms of the 1990s, the North American Free Trade Agreement (NAFTA), and the slow growth in Mexico since then. It is impossible to be comprehensive, and the references appearing in the citations are highly selective and biased (where possible) in favor of English-language works, although Spanish is a must for getting beyond the basics. This is especially true in economic history, where some of the most innovative and revisionist work is being done, as it should be, by historians and economists in Mexico.[2]


Where (and What) is Mexico?

For most of its long history, Mexico’s boundaries have shifted, even as its core territory remained broadly stable. Colonial Mexico stretched from Guatemala north across what is now California and the Southwestern United States, and vaguely into the Pacific Northwest. There matters stood for more than three centuries.[3] The big shock came at the end of the War of 1847 (“the Mexican-American War” in U.S. history). The Treaty of Guadalupe Hidalgo (1848) ended the war, but in so doing ceded half of Mexico’s former territory to the United States; recall that Texas had been lost in 1836. The northern boundary now ran on a line beginning with the Rio Grande to El Paso, and thence more or less west to the Pacific Ocean south of San Diego. With one major adjustment in 1853 (the Gadsden Purchase, or Treaty of the Mesilla) and minor ones thereafter occasioned by the shifting course of the Rio Grande, there it has remained.

Prior to the arrival of the Europeans, Mexico was a congeries of ethnic groups and city-states whose own boundaries were unstable. Before the emergence of the most powerful of these states in the fifteenth century, the so-called Triple Alliance (popularly, the “Aztec Empire”), Mesoamerica consisted of cultural regions determined by political elites and spheres of influence dominated by large ceremonial centers such as La Venta, Teotihuacan, and Tula.

While such regions may have been dominant at different times, they were never “economically” independent of one another. At Teotihuacan, there were living quarters given over to Olmec residents from the Veracruz region, presumably merchants. Mesoamerica was connected, if not unified, by an ongoing trade in luxury goods and valuable stones such as jade, turquoise and precious feathers. This was not, however, trade driven primarily by factor endowments and relative costs. Climate and resource endowments did, to be sure, differ significantly over the widely diverse regions and microclimates of Mesoamerica. Yet trade was also political, and ritualized in religious belief. Calling the shipment of turquoise from the (U.S.) Southwest to Central Mexico the outcome of market activity, for example, is an anachronism. In the very long run, such prehistoric exchange facilitated the later emergence of trade routes, roads, and more technologically advanced forms of transport. But arbitrage does not appear to have figured importantly in it.[4]

In sum, what we call “Mexico” in a modern sense is not of much use to the economic historian with an interest in the country before 1870, which is to say, the great bulk of its history. In these years, specificity of time and place, sometimes reaching to the village level, is an indispensable prerequisite for meaningful discussion. At the very least, it is usually advisable to be aware of the substantial regional differences that reflect the ethnic and linguistic diversity of the country both before and after the arrival of the Europeans. There are fully ten language families in Mexico, and two individual languages, Nahuatl and Quiché, count over a million speakers each.[5]


Trade and Tribute before the Europeans

In the codices, or folded deerskin paintings, that the Europeans examined (or actually commissioned), they soon became aware of a prominent form of Mesoamerican economic activity: tribute, that is, taxation in kind or in labor services. In the absence of anything that served as money, tribute was forced exchange. Tribute has been interpreted as a means of redistribution in a nonmonetary economy. Social and political units formed the basis for assessment, and the goods collected included maize, beans, chile and cotton cloth. It was through tribute that the indigenous “empires” mobilized labor and resources. There is little or no evidence for the existence of labor or land markets to do so, for these were a European import, although marketplaces for goods existed in profusion.

To an extent, the preconquest reliance on barter and the absence of money account for the ubiquity of tribute. The absence of money itself is much more difficult to explain, and it was surely an obstacle to the growth of productivity in the indigenous economies.

Tribute was a near-universal attribute of Mesoamerican ceremonial centers and political empires. The city of Teotihuacan (ca. 600 CE, with a population of 125,000 or more) in central Mexico depended on tribute to support an upper stratum of priests and nobles while the tributary population itself lived at subsistence. Tlatelolco (ca. 1520, with a population ranging from 50,000 to 100,000) drew maize, cotton, cacao, beans and precious feathers from a wide swath of territory extending broadly from the Pacific to the Gulf coast, and these supported an upper stratum of priests, warriors, nobles, and merchants. It was this urban complex, sitting atop the lagoons that filled the Valley of Mexico, that so awed the arriving conquerors.

While the characterization of tribute as both a corvée and a tax in kind to support nonproductive populations is surely correct, its persistence in altered (i.e., monetized) form under colonial rule does suggest an important question. The tributary area of the Mexica (“Aztec” is a political term, not an ethnic one) broadly comprised a Pacific slope, a central valley, and a Gulf slope. These embrace a wide range of geographic features, from rugged volcanic highlands (and even higher snow-capped volcanoes) to marshy, humid coastal plains. Even today, travel through these regions is challenging. Lacking both the wheel and draught animals, the indigenous peoples relied on human transport or, where possible, waterborne exchange. However we measure the costs of transportation, they were high; in the colonial period, they typically circumscribed the radius of subsistence markets to 25 to 35 miles. Under the circumstances, it is not easy to imagine that voluntary exchange, particularly between the coastal lowlands and the temperate to cold highlands and mountains, would be profitable for any but the most highly valued goods. In some parts of Mexico, as in the Andean region, linkages of family and kinship bound different regions together in a system of reciprocal economic obligations. Absent such connections, it is not hard to imagine how, for example, transporting woven cottons from the coastal lowlands to the population centers of the highlands could become a political obligation rather than a matter of profitable, voluntary exchange. The relatively ambiguous role of markets in both labor and goods that persisted into the nineteenth century may derive from just this combination of climatic and geographical characteristics. It is what made voluntary exchange under capitalistic markets such a puzzlingly problematic answer to the ordinary demands of economic activity.


[See the relief map below for the principal physical features of Mexico.]


[See the political map below for Mexican states and state capitals.]

Maps used by permission of the University of Texas Libraries, The University of Texas at Austin.


“New Spain” or Colonial Mexico: The First Phase

Mexico was established by military conquest and civil war. In the process, a civilization with its own institutions and complex culture was profoundly modified and altered, if not precisely destroyed, by the European invaders. The catastrophic elements of conquest, including the sharp decline of the existing indigenous population from perhaps 25 million to fewer than a million within a century (through warfare, disease, social disorganization and the imposition of demands for labor and resources), should nevertheless not preclude some assessment, however tentative, of its economic level in 1519, when the Europeans arrived.[6]

Recent thinking suggests that Spain was far from poor when it began its overseas expansion. If this were so, the Europeans’ reactions to what they found on the mainland of Mexico (significantly, not in the Caribbean, and especially not in Cuba, where they were first established) are important. We have several accounts of the conquest of Mexico by the European participants, of which Bernal Díaz del Castillo’s is the best known, but not the only one. The reaction of the Europeans was almost uniformly one of astonishment at the apparent material wealth of Tenochtitlan. The public buildings, the spacious residences of the temple precinct, the causeways linking the island to the shore, and the fantastic array of goods available in the marketplace evoked comparisons to Venice, Constantinople, and other wealthy centers of European civilization. While it is true that this was a view of the indigenous elite, the beneficiaries of the wealth accumulated from numerous tributaries, it hardly suggests anything other than a kind of storied opulence. Of course, the peasant commoners lived at subsistence and enjoyed no such privileges, but then so did the peasants of the society from which Bernal Díaz, Cortés, Pedro de Alvarado and the other conquerors were drawn. It is hard to imagine that the average standard of living in Mexico was any lower than that of the Iberian Peninsula. The conquerors remarked on the physical size and apparent robust health of the people whom they met, and from this, scholars such as Woodrow Borah and Sherburne Cook concluded that the physical size of the Europeans and the Mexicans was about the same. Borah and Cook surmised that caloric intake per individual in Central Mexico was around 1,900 calories per day, which certainly seems comparable to European levels.[7]

Certainly, technological differences hampered commercial exchange with Europe: the absence of the wheel for transportation, a metallurgy that did not include iron, and exclusive reliance on pictographic writing systems. Yet by the same token, Mesoamerican agricultural technology was richly diverse and especially oriented toward labor-intensive techniques, well suited to pre-conquest Mexico’s factor endowments. As Gene Wilken points out, Bernardino de Sahagún explained in his General History of the Things of New Spain that the Nahua farmer recognized two dozen soil types related to origin, source, color, texture, smell, consistency and organic content. They were expert at soil management.[8] So it is possible not only to misspecify but also to overstate the technological “backwardness” of Mesoamerica relative to Europe, and historians routinely have.

The essentially political and clan-based nature of economic activity made the distribution of output somewhat different from standard neoclassical models. Although no one seriously maintains that indigenous civilization did not include private property and, in fact, property rights in humans, the distribution of product tended to emphasize average rather than marginal product. If responsibility for tribute was collective, it is logical to suppose that there was some element of redistribution and collective claim on output by the basic social groups of indigenous society, the clans or calpulli.[9] Whatever the case, it seems clear that viewing indigenous society and economy as strained by population growth to the point of collapse, as the so-called “Berkeley school” did in the 1950s, is no longer tenable. It is more likely that the tensions exploited by the Europeans to divide and conquer their native hosts and so erect a colonial state on pre-existing native entities were mainly political rather than socioeconomic. It was through the assistance of native allies such as the Tlaxcalans, as well as with the help of previously unknown diseases such as smallpox that ravaged the indigenous peoples, that the Europeans were able to place a weakened Tenochtitlan under siege and finally defeat it.


Colonialism and Economic Adjustment to Population Decline

With the subjection first of Tenochtitlan and Tlatelolco and then of other polities and peoples, a process that would ultimately stretch well into the nineteenth century and was never really completed, the Europeans turned their attention to making colonialism pay. The process had several components: the modification or introduction of institutions of rule and appropriation; the introduction of new flora and fauna that could be turned to economic use; the reorientation of a previously autarkic and precapitalist economy to the demands of trade and commercial exploitation; and the implementation of European fiscal sovereignty. These processes were complex, required much time, and were, in many cases, only partly successful. There is considerable speculation regarding how long it took before Spain (arguably a relevant term by the mid-sixteenth century) made colonialism pay. The best we can do is present a schematic view of what occurred. Regional variations were enormous: what passes for a “typical” outcome or institution of colonialism is usually one that was visible in central Mexico. Moreover, all generalizations are fragile, rest on limited quantitative evidence, and will no doubt be substantially modified eventually. The message is simple: proceed with caution.

The Europeans did not seek to take Mesoamerica as a tabula rasa. In some ways, they would have been happy simply to become the latest in a long line of ruling dynasties established by decapitating native elites and assuming control. The initial demand of the conquerors for access to native labor in the so-called encomienda was precisely that, with the actual task of governing left to the surviving and collaborating elite: the principle of “indirect rule.”[10] There were two problems with this strategy: the natives resisted, and the natives died. They died in such large numbers as to make the original strategy impracticable.

The number of people who lived in Mesoamerica has long been a subject of controversy, and there is no point in rehearsing it once again. The numbers are unknowable and, in an economic sense, not really important. The population of Tenochtitlan has been variously estimated at between 50,000 and 200,000 individuals, depending on the instruments of estimation. As previously mentioned, some estimates of the Central Mexican population range as high as 25 million on the eve of the European conquest, and virtually no serious student accepts the small population estimates based on the work of Ángel Rosenblat. The point is that labor was abundant relative to land, and that the small surpluses of a large tributary population must have supported the opulent elite that Bernal Díaz and his companions described.

By 1620, or thereabouts, the indigenous population had fallen to less than a million, according to Cook and Borah. This is not just the quantitative speculation of modern historical demographers. Contemporaries such as Jerónimo de Mendieta in his Historia eclesiástica indiana (1596) spoke of towns formerly densely populated, now witness to “the palaces of those former Lords ruined or on the verge of. The homes of the commoners mostly empty, roads and streets deserted, churches empty on feast days, the few Indians who populate the towns in Spanish farms and factories.” Mendieta was an eyewitness to the catastrophic toll that European microbes and warfare took on the native population. There was a smallpox epidemic in 1519-20 in which 5 to 8 million died. The epidemic of hemorrhagic fever from 1545 to 1548 was one of the worst demographic catastrophes in human history, killing 5 to 15 million people. And for the epidemic of 1576 to 1578, when 2 to 2.5 million people died, we have clear evidence that land prices collapsed in the Valley of Mexico (specifically in Coyoacán, a village outside Mexico City, as the reconstructed Tenochtitlan was called). The death toll was staggering. Lesser outbreaks were registered in 1559, 1566, 1587, 1592, 1601, 1604, 1606, 1613, 1624, and 1642. The larger point is that the intensive use of native labor, such as the encomienda, had to come to an end, whatever its legal status had become by virtue of the New Laws (1542). The encomienda, or the simple exploitation of massive numbers of indigenous workers, was no longer possible. There were too few “Indians” by the end of the sixteenth century.[11]

As a result, the institutions and methods of economic appropriation were forced to change. The Europeans introduced pastoral agriculture (the herding of cattle and sheep) and employed now-abundant land and scarce labor in the form of the hacienda, while the remaining natives were brought together in “villages” whose origins were not essentially pre- but post-conquest, the so-called congregaciones, at the same time that titles to now-vacant lands were created, regularized and “composed.”[12] (Land titles were a European innovation as well.) Sheep and cattle became part of the new institutional backbone of the colony. The natives would continue to rely on maize for the better part of their subsistence, but the Europeans introduced wheat, olives (oil), grapes (wine) and even chickens, which the natives rapidly adopted. On the whole, the results of these alterations were complex. Some scholars argue that the native diet improved even in the face of diminishing numbers, a consequence of increased land per person and of a greater variety of foodstuffs, and that the agricultural potential of the colony now called New Spain was enhanced. By the beginning of the seventeenth century, the combined indigenous, European immigrant, and new mixed-blood populations could largely survive on the basis of their own production. The introduction of sheep led to the manufacture of woolens in what were called obrajes, or manufactories, in Puebla, Querétaro, and Coyoacán. The native peoples continued to produce cottons (a domestic crop) under the stimulus of European organization, lending, and marketing. Extensive pastoralism, the cultivation of cereals and even the incorporation of native labor then marked the emergence of the great estates or haciendas, which remained a characteristic rural institution into the twentieth century, when the Mexican Revolution put an end to many of them. Thus the colony of New Spain continued to feed, clothe and house itself independent of metropolitan Spain’s direction. Certainly, Mexico before the Conquest was self-sufficient. The extent to which the immigrant and American Spaniard or creole population depended on imports of wine, oil and other foodstuffs and textiles in the decades immediately following the conquest is much less clear.

At the same time, other profound changes accompanied the introduction of Europeans, their crops and their diseases into what they termed the “kingdom” (not colony, for constitutional reasons) of New Spain.[13] Prior to the conquest, land and labor had not been commoditized to any significant extent, although a distinction was recognized between possession and ownership. Scholars who have closely examined the emergence of land markets after the conquest (mainly in the Valley of Mexico) are virtually unanimous in this conclusion. To the extent that markets in labor and commodities emerged, it took until the 1630s (and later elsewhere in New Spain) for the development to reach maturity. Older mechanisms of allocating labor by administrative means (repartimiento) or by outright coercion persisted even then. Purely economic incentives in the form of money wages and prices never seemed adequate to the job of mobilizing resources, and those with access to political power were reluctant to pay a competitive wage. In New Spain, the use of some sort of political power or rent-seeking nearly always accompanied labor recruitment. It was, quite simply, an attempt to evade the implications of relative scarcity, and it renders the entire notion of “capitalism” as a driving economic force in colonial Mexico quite inexact.


Why the Settlers Resisted the Implications of Scarce Labor

The reasons behind this development are complex and varied. The evidence we have for the Valley of Mexico demonstrates that the relative price of labor rose while the relative price of land fell, even when nominal movements of one or the other remained fairly limited. For instance, the table below shows that from 1570-75 through 1591-1606, the price of unskilled labor in the Valley of Mexico nearly tripled while the price of land in the Valley (Coyoacán) fell by nearly two thirds. On the whole, the price of labor relative to land increased by nearly 800 percent. This evolution of relative prices inevitably worked against the demanders of labor (Europeans and, increasingly, creoles, or Americans of largely European ancestry) and in favor of the suppliers (native labor, or people of mixed race generically termed mestizos). This was not, of course, what the Europeans had in mind, and by capturing legal institutions (local magistrates in particular), they frequently sought to substitute compulsion for what would have been costly “free labor.” What has been termed the “depression” of the seventeenth century may well represent one of the consequences of this evolution: an abundance of land, a scarcity of labor, and the attempt of the new rulers to adjust to changing relative prices. There were repeated royal prohibitions on the use of forced indigenous labor in both public and private works, and thus a reduction in the supply of labor. All of this is highly speculative, no doubt, but the adjustment came during the central decades of the seventeenth century, when New Spain increasingly produced its own woolens and cottons and largely assumed the task of providing itself with foodstuffs, and was thus required to save and invest more. No doubt the new rulers felt the strain of trying to do more with less.[14]


Years          Land Price Index    Labor Price Index    (Labor/Land) Index
1570-1575            100                  100                   100
1576-1590             50                  143                   286
1591-1606             33                  286                   867


Source: Calculated from Rebecca Horn, Postconquest Coyoacan: Nahua-Spanish Relations in Central Mexico, 1519-1650 (Stanford: Stanford University Press, 1997), p. 208, and José Ignacio Urquiola Permisan, “Salarios y precios en la industria manufacturera textil de la lana en Nueva España, 1570-1635,” in Virginia García Acosta (ed.), Los precios de alimentos y manufacturas novohispanos (México, DF: CIESAS, 1995), p. 206.
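
To make the arithmetic explicit, the right-hand column is simply the labor index divided by the land index, rebased to 100. For the final period:

$$\left(\frac{\text{Labor}}{\text{Land}}\right)_{1591\text{-}1606} = \frac{286}{33} \times 100 \approx 867,$$

an increase of roughly 767 percent over the 1570-1575 base, the “nearly 800 percent” cited above.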


The overall role of Mexico within the Habsburg Empire was in flux as well. Nothing signals the change as much as the emergence of silver mining as the principal source of Mexican exportables in the second half of the sixteenth century. While Mexico would soon be eclipsed by Peru as the most productive center of silver mining (at least until the eighteenth century), the discovery of significant silver mines in Zacatecas in the 1540s transformed the economy of the Spanish empire, and the character of New Spain’s along with it.




Silver Mining

While silver mining and smelting were practiced before the conquest, they were never a focal point of indigenous activity. But for the Europeans, Mexico was largely about silver mining. From the mid-sixteenth century onward, it was explicitly understood by the viceroys that they were to do all in their power to “favor the mines,” as one memorable royal instruction enjoined. Again, there has been much controversy over the precise amounts of silver that Mexico sent to the Iberian Peninsula. What we do know with certainty is that Mexico (and the Spanish Empire) became the leading source of silver, monetary reserves, and thus of high-powered money. Over the course of the colonial period, most sources agree, Mexico provided nearly 2 billion pesos (dollars), or roughly 1.6 billion troy ounces, to the world economy. The graph below, taken from the work of John TePaske, pictures the remissions of all Mexican silver to both Spain and the Philippines.[15]

Since the population of Mexico under Spanish rule was at most 6 million people by the end of the colonial period, the kingdom’s silver output could only be considered astronomical.
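
The conversion implicit in these figures is straightforward: the colonial peso contained roughly 0.8 troy ounces of fine silver, so

$$2 \times 10^{9}\ \text{pesos} \times 0.8\ \frac{\text{troy oz}}{\text{peso}} \approx 1.6 \times 10^{9}\ \text{troy oz}.$$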

This production has to be considered in both its domestic and international dimensions. From a domestic perspective, the mines were what a later generation of economists would call “growth poles.” They were markets in which inputs were transformed into tradable outputs at a much higher rate of productivity (because of mining’s relatively advanced technology) than in Mexico’s other activities. Silver thus became Mexico’s principal exportable good, and remained so well into the late nineteenth century. The residual claimants on silver production were many and varied. There were, of course, the silver miners themselves in Mexico and their merchant financiers and suppliers. They ranged from some of the wealthiest people in the world at the time, such as the Count of Regla (1710-1781), who donated warships to Spain in the eighteenth century, to individual natives in Zacatecas smelting their own stocks of silver ore.[16] While the conditions of labor in Mexico’s silver mines were almost uniformly bad, the compensation ranged from above-market wages paid to free labor in the prosperous larger mines of the Bajío and the North to forced village labor drafts in more marginal (and presumably less profitable) sites such as Taxco. In the Iberian Peninsula, income from American silver mines ultimately supported not only a class of merchant entrepreneurs in the large port cities, but virtually the core of the Spanish political nation, including monarchs, royal officials, churchmen, the military and more. And finally, silver flowed to those who valued it most highly throughout the world. It is generally estimated that 40 percent of Spain’s American (not just Mexican, but Peruvian as well) silver production ended up in hoards in China.

Within New Spain, mining centers such as Guanajuato, San Luis Potosí, and Zacatecas became places where economic growth took place rapidly, where labor markets more readily evolved, and where the standard of living became obviously higher than in neighboring regions. Mining centers tended to crowd out growth elsewhere because the rate of return for successful mines exceeded what could be gotten in commerce, agriculture and manufacturing. Because silver was the numeraire for Mexican prices (Mexico was effectively on a silver standard), variations in silver production could and did have substantial effects on real economic activity elsewhere in New Spain. There is considerable evidence that silver mining saddled Mexico with an early case of “Dutch disease,” in which irreducible costs imposed by the silver standard ultimately rendered manufacturing and the production of other tradable goods in New Spain uncompetitive. For this reason, the expansion of Mexican silver production in the years after 1750 was never unambiguously accompanied by overall, as opposed to localized, prosperity. Silver mining tended to absorb a disproportionate quantity of resources and to keep New Spain’s price level high, even when the business cycle slowed down, a fact that was to impress visitors to Mexico well into the nineteenth century. Mexican silver accounted for well over three-quarters of exports by value into the nineteenth century as well. The estimates vary widely, for silver was by no means the only, or even the most important, source of revenue to the Crown, but by the end of the colonial era, the Kingdom of New Spain probably accounted for 25 percent of the Crown’s imperial income.[17] That is why reformist proposals circulating in governing circles in Madrid in the late eighteenth century fixed on Mexico. If there was any threat to the American Empire, royal officials thought, Mexico, and increasingly Cuba, were worth holding on to. From a fiscal standpoint, Mexico had become just that important.[18]


“New Spain”: The Second Phase of the Bourbon “Reforms”

In 1700, the last of the Spanish Habsburgs died and a disputed succession followed. The ensuing conflict, known as the War of the Spanish Succession, came to an end in 1714. The grandson of the French king Louis XIV came to the Spanish throne as King Philip V. The dynasty he represented was known as the Bourbons. For the next century or so, they were to determine the fortunes of New Spain. Traditionally, the Bourbons, especially the later ones, have been associated with an effort to “renationalize” the Spanish empire in America after it had been thoroughly penetrated by French, Dutch, and lastly, British commercial interests.[19]

There were at least two areas in which the Bourbon dynasty, “reformist” or no, affected the Mexican economy. One of them dealt with raising revenue and the other was the international position of the imperial economy, specifically, the volume and value of trade. A series of statistics calculated by Richard Garner shows that the share of Mexican output or estimated GDP taken by taxes grew by 167 percent between 1700 and 1800. The number of taxes collected by the Royal Treasury increased from 34 to 112 between 1760 and 1810. This increase, sometimes labelled as a Bourbon “reconquest” of Mexico after a century and a half of drift under the Hapsburgs, occurred because of Spain’s need to finance increasingly frequent and costly wars of empire in the eighteenth century. An entire array of new taxes and fiscal placemen came to Mexico. They affected (and alienated) everyone, from the wealthiest merchant to the humblest villager. If they did nothing else, the Bourbons proved to be expert tax collectors.[20]

The second and equally consequential change in imperial management lay in the revision and “deregulation” of New Spain’s international trade: the evolution from a “fleet” system to a regime of independent sailings, and finally to voyages to and from a far larger variety of metropolitan and colonial ports. From the mid-sixteenth century onwards, ocean-going trade between Spain and the Americas was, in theory at least, closely regulated and supervised. Ships in convoy (flota) sailed together annually under license from the monarchy and returned together as well. Since so much silver specie was carried, the system made sense, even if the flotas made a tempting target and the problem of contraband was immense. The point of departure was Seville and, later, Cadiz. Under pressure from other outports in the late eighteenth century, the system was finally relaxed. As a consequence, the volume and value of trade to Mexico increased as the price of importables fell. Import-competing industries in Mexico, especially textiles, suffered under the competition, and established merchants complained that the new system of trade was too loose. But to no avail. There is no measure of the barter terms of trade for the eighteenth century, but anecdotal evidence suggests they improved for Mexico. Nevertheless, it is doubtful that these gains could have come anywhere close to offsetting the financial cost of Spain’s “reconquest” of Mexico.[21]

On the other hand, the few accounts of per capita real income growth in the eighteenth century that exist suggest little more than stagnation, the result of population growth and a rising price level. Admittedly, looking for modern economic growth in Mexico in the eighteenth century is an anachronism, although there is at least anecdotal evidence of technological change in silver mining, especially in the use of gunpowder for blasting and excavating, and of some productivity increase in the industry. So even though the share of international trade outside of goods such as cochineal and silver was quite small, at the margin, changes in the trade regime were important. There is also some indication that asset income rose and labor income fell, which fueled growing social tensions in New Spain. In the last analysis, the growing fiscal pressure of the Spanish empire came when the standard of living for most people in Mexico, the native and mixed-blood population, was stagnating. During periodic subsistence crises, especially those propagated by drought and epidemic disease, mostly in the 1780s, living standards fell. Many historians think of late colonial Mexico as something of a powder keg waiting to explode. When it did, in 1810, the explosion was the result of a political crisis at home and a dynastic failure abroad. What New Spain had negotiated during the War of the Spanish Succession (regime change) proved impossible to surmount during the Napoleonic Wars (1794-1815). This may well be the most sensitive indicator of how economic conditions changed in New Spain under the heavy, not to say clumsy, hand of the Bourbon “reforms.”[22]


The War for Independence, the Insurgency, and Their Legacy

The abdication of the Bourbon monarchy to Napoleon Bonaparte in 1808 produced a series of events that ultimately resulted in the independence of New Spain. The rupture was accompanied by a violent peasant rebellion headed by the clerics Miguel Hidalgo and José Morelos that, one way or another, carried off 10 percent of the population between 1810 and 1820. Internal commerce was largely paralyzed. Silver mining essentially collapsed between 1810 and 1812 and a full recovery of mining output was delayed until the 1840s. The mines located in zones of heavy combat, such as Guanajuato and Querétaro, were abandoned by fleeing workers. Thus neglected, they quickly flooded.

At the same time, the fiscal and human costs of this period, the Insurgency, were even greater.[23] The heavy borrowings in which the Bourbons engaged to finance their military alliances left Mexico with a considerable legacy of internal debt, estimated at £16 million at Independence. The damage to the fiscal, bureaucratic and administrative structure of New Spain, in the face of the continuing threat of Spanish reinvasion in the 1820s (Spain did not recognize the independence Mexico achieved in 1821), drove the independent governments into foreign borrowing on the London market to the tune of £6.4 million in order to finance continuing heavy military outlays. With a reduced fiscal capacity, in part the legacy of the Insurgency and in part the deliberate effort of Mexican elites to resist any repetition of Bourbon-style taxation, Mexico defaulted on its foreign debt in 1827. There followed a serpentine sixty-year history of moratoria, restructuring and repudiation (1867); it took until 1884 for the government to regain access to international capital markets, at what cost can only be imagined. Private-sector borrowing and lending continued, although to what extent is currently unknown. What is clear is that the total (internal plus external) indebtedness of Mexico relative to late colonial GDP was somewhere in the range of 47 to 56 percent.[24]
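
A rough consistency check is possible using figures cited elsewhere in this article (output of perhaps 40 pesos per head for a late colonial population of roughly 6 million) and an assumed exchange rate of about 5 pesos per pound sterling, the approximate silver parity of the era:

$$\frac{(16 + 6.4) \times 10^{6}\ \pounds \times 5\ \text{pesos}/\pounds}{6 \times 10^{6}\ \text{persons} \times 40\ \text{pesos/person}} = \frac{112\ \text{million pesos}}{240\ \text{million pesos}} \approx 0.47,$$

which matches the lower bound of the 47 to 56 percent range; smaller estimates of late colonial GDP push the ratio toward the upper bound.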

This was, perhaps, not an insubstantial amount for a country whose mechanisms of public finance were in what could be mildly termed chaotic condition in the 1820s and 1830s as the form, philosophy, and mechanics of government oscillated from federalist to centralist and back into the 1850s.  Leaving aside simple questions of uncertainty, there is the very real matter that the national government—whatever the state of private wealth—lacked the capacity to service debt because national and regional elites denied it the means to do so. This issue would bedevil successive regimes into the late nineteenth century, and, indeed, into the twentieth.[25]

At the same time, the demographic effects of the Insurgency exacted a cost in terms of lost output from the 1810s through the 1840s. Gaping holes in the labor force emerged, especially in the fertile agricultural plains of the Bajío, creating further obstacles to the growth of output. It is difficult to generalize about the fortunes of the Mexican economy in this period because of the dramatic regional variations in the Republic’s economy. A rough estimate of output per head in the late colonial period is perhaps 40 pesos (dollars).[26] After a sharp contraction in the 1810s, income remained in that neighborhood well into the 1840s, at least until the eve of the war with the United States in 1846. By the time United States troops crossed the Rio Grande, a recovery had been under way, but the war arrested it. Further political turmoil and civil war in the 1850s and 1860s represented setbacks as well. In this way, a half century or so of potential economic growth was sacrificed from the 1810s through the 1870s. This was not an uncommon experience in Latin America in the nineteenth century, and the period has even been called the Stage of the Great Delay.[27] Whatever the exact rate of real per capita income growth, it is hard to imagine it ever exceeded two percent, if indeed it reached much more than half that.


Agricultural Recovery and War

On the other hand, it is clear that there was a recovery in agriculture in the central regions of the country, most notably in the staple maize crop and in wheat. The famines of the late colonial era, especially that of 1785-86, when massive numbers perished, were not repeated. There were years of scarcity, with corresponding periodic outbreaks of epidemic disease (the cholera epidemic of 1832 affected Mexico as it did so many other places), but by and large, the dramatic human wastage of the colonial period ceased, and the death rate does appear to have begun to fall. Very good series on wheat deliveries and retail sales taxes for the city of Puebla, southeast of Mexico City, show a similarly strong recovery in the 1830s and early 1840s, punctuated only by the cholera epidemic, whose effects were felt everywhere.[28]

Ironically, while the Panic of 1837 appears to have hit the financial economy in Mexico hard, with a dramatic fall in public borrowing (and private lending), especially in the capital,[29] the incipient recovery of the real economy was ended by war with the United States. It is not possible to put numbers on the cost of the war to Mexico, which lasted intermittently from 1846 to 1848, but the loss of what had been the Southwest under Mexico is most often emphasized. This may or may not be accurate. Certainly, the loss of California, where gold was discovered in January 1848, weighs heavily on the historical imaginations of modern Mexicans. There is also the sense that the indemnity paid by the United States ($15 million) was wholly inadequate, which seems at least understandable when one considers that Andrew Jackson offered $5 million to purchase Texas alone in 1829.

It has been estimated that the agricultural output of the Mexican “cession,” as it was called, was nearly $64 million in 1900, and that the value of livestock in the territory was over $100 million. The value of gold and silver produced was about $35 million. Whether it is reasonable to employ these numbers in estimating the present value of lost output relative to the indemnity paid is at least debatable as a counterfactual, unless one chooses to regard the indemnity as the annuitized value of a perpetuity “purchased” from Mexico at gunpoint, which seems more like robbery than exchange. In the long run, the loss may have been staggering, but in the short run, much less so. The northern territories Mexico lost had really yielded very little up until the War. In fact, the balance of costs and revenues to the Mexican government may well have been negative.[30]

Whatever the case, the decades following the war with the United States until the beginning of the administration of Porfirio Díaz (1876) are typically regarded as a step backward. The reasons are several. In 1850, the government essentially went broke. While it is true that its financial position had been disintegrating since the mid-1830s, 1850 marked a turning point. The entire indemnity payment from the United States was consumed in debt service, yet this made no appreciable dent in the outstanding principal, which hovered around 50 million pesos (dollars). The limits of debt sustainability had been reached: governing turned into a wild search for resources, which proved fruitless. Mexico continued to sell off parts of its territory, as in the Treaty of the Mesilla (1853), or Gadsden Purchase, whose proceeds largely ended up in the hands of domestic financiers rather than foreign creditors.[31] Political divisions, terrible enough before the war with the United States, turned catastrophic. A series of internal revolts, uprisings and military pronouncements segued into yet another violent civil war between liberals and conservatives (the latter now a formal party), the so-called Three Years’ War (1858-61). In 1862, frustrated by Mexico’s suspension of foreign debt service, Great Britain, Spain and France seized Veracruz. A Habsburg prince, Maximilian, was installed as Mexico’s second “emperor” (Agustín de Iturbide was the first). While only the French actively prosecuted the war within Mexico, and while they never controlled more than a very small part of the country, the disruption was substantial. By 1867, with Maximilian deposed and the French army withdrawn, the country required serious reconstruction.[32]


Juárez, Díaz and the Porfiriato: Authoritarian Development

To be sure, the origins of authoritarian development in nineteenth-century Mexico did not lie with Porfirio Díaz, as is often asserted. Their beginnings actually went back several decades earlier, to the last presidency of Santa Anna, generally known as the Dictatorship (1853-54). But Santa Anna was overthrown too quickly (and now for the last time) for much to have actually occurred. A ministry for development (Fomento) had been created, but the Liberal revolution of Ayutla swept Santa Anna and his clique away for good. Serious reform seems to have begun around 1870, when the Finance Minister was Matías Romero. Romero was intent on providing Mexico with a modern Treasury, and on ending the hand-to-mouth financing that had mostly characterized the country’s government since Independence, or at least since the mid-1830s. So it is appropriate to pick up the story here. Where did Mexico stand in 1870?[33]

The most revealing data that we have on the state of economic development come from various anthropometric and cost of living studies by Amilcar Challu, Aurora Gómez Galvarriato, and Moramay López Alonso.[34] Their research overlaps in part, and gives a fascinating picture of Mexico in the long run, from 1735 to 1940. For the moment, let us look at the period leading up to 1867, when the French withdrew from Mexico. If we look at the heights of the “literate” population, Challu’s research suggests that the standard of living stagnated between 1750 and 1840. If we look at the “illiterate” population, there was a consistent decline until 1850. Since the share of the illiterate population was clearly larger, we might infer that living standards for most Mexicans declined after 1750, however we interpret other quantitative and anecdotal evidence.

López Alonso confines her work to the period after the 1840s. From 1850 through 1890, her work generally corroborates Challu’s. The period after the Mexican War was clearly a difficult one for most Mexicans, and the challenge that both Juárez and Díaz faced was a macroeconomy in frank contraction after 1850. The regimes after 1867 were faced with stagnation.

The real wage study by Amilcar Challu and Aurora Gómez Galvarriato, when combined with the existing anthropometric work, offers a pretty clear correlation between falling real wages and declining heights.[35]

It would then appear that growth from the 1850s through the 1870s was slow, if there was any at all, and perhaps inferior to what had come between the 1820s and the 1840s. Given the growth of import substitution during the Napoleonic Wars, roughly 1790-1810, coupled with the commercial opening brought by the Bourbons’ post-1789 extension of “free trade” to Mexico, we might well see a pattern of mixed performance (1790-1810), sharp contraction (the 1810s), rebound and recovery with sharp financial shocks coming in the mid-1820s and mid-1830s (1820s-1840s), and stagnation once more (1850s-1870s). Real per capita output oscillated, sometimes sharply, around an underlying growth rate of perhaps one percent; changes in the distribution of income and wealth are more or less impossible to identify consistently, because studies conflict.

Far less speculative is that the foundations for modern economic growth were laid down in Mexico during the era of Benito Juárez. Its key elements were the creation of a secular, bourgeois state and the secular institutions embedded in the Constitution of 1857. The titanic ideological struggles between liberals and conservatives were ultimately resolved in favor of a liberal, but nevertheless centralizing, form of government under Porfirio Díaz. This was the beginning of the end of the Ancien Régime. Under Juárez, corporate lands of the Church and native villages were privatized in favor of individual holdings, and their former owners were compensated in bonds. This was effectively the largest transfer of land title since the late sixteenth century (not including the war with the United States), and it cemented the idea of individual property rights. With the expulsion of the French and the outright repudiation of the French debt, the Treasury was reorganized along more modern lines. The country got additional breathing room by suspending debt service to Great Britain until the terms of the 1825 loans were renegotiated under the Dublán Convention (1884). Equally if not more important, Mexico now entered the railroad age in 1876, nearly forty years after the first tracks were laid in Cuba in 1837. The educational system was expanded in an attempt to create at least a core of literate citizens who could adopt the tools of modern finance and technology. Literacy still remained in the neighborhood of 20 percent, and life expectancy at birth scarcely reached 40 years, if that. Yet by the end of the Restored Republic (1876), Mexico had turned a corner. There would be regressions, but the nineteenth century had finally arrived, aptly if brutally signified by Juárez’s execution of Maximilian in Querétaro in 1867.[36]

Porfirian Mexico

Yet when Díaz came to power, Mexico was, in many ways, much as it had been a century earlier. It was a rural, agrarian nation whose principal crop was maize, followed by wheat and beans. These were produced on haciendas and ranchos in Jalisco, Guanajuato, Michoacán, Mexico, and Puebla, as well as in Oaxaca, Veracruz, Aguascalientes, Chihuahua and Sonora. Cotton, which with great difficulty had begun to supply a mechanized factory regime (first in spinning, then in weaving), was produced in Oaxaca, Yucatán, Guerrero and Chiapas, as well as in parts of Durango and Coahuila. Domestic production of raw cotton rarely sufficed to supply the factories in Michoacán, Querétaro, Puebla and Veracruz, so imports from the Southern United States were common. For the most part, the indigenous population lived on maize, beans, and chile, producing its own subsistence on small, scattered plots known as milpas. Perhaps 75 percent of the population was rural, with the remainder to be found in cities like Mexico, Guadalajara, San Luis Potosí, and later, Monterrey. Population growth in the Southern and Eastern parts of the country had been relatively slow in the nineteenth century; the North and center-North grew more rapidly, the Center less so. Immigration from abroad had been of no consequence.[37]

It is a commonplace to see the presidency of Porfirio Díaz (1876-1910) as a critical juncture in Mexican history, and this is no less true of its economic and commercial history. By 1910, when the Díaz government fell and Mexico descended into two decades of revolution, the first one extremely violent, the face of the country had changed for good. The nature and effect of these changes remain not only controversial, but essential for understanding the subsequent evolution of the country, so we should pause here to consider some of their essential features.

While mining, and especially silver mining, had long held a privileged place in the economy, the nineteenth century witnessed a number of significant changes. Until about 1889, the coinage of gold, silver, and copper (a very rough proxy for production, given how much silver had been illegally exported) continued on a steadily upward track. In 1822, coinage was about 10 million pesos. By 1846, it had reached roughly 15 million pesos. There was something of a structural break after the war with the United States (its origins are unclear), and coinage continued upward to about 25 million pesos in 1888. Then, the falling international price of silver, brought on by large increases in supply elsewhere, drove the trend after 1889 sharply downward. By 1909-10, coinage had collapsed to levels not seen since the 1820s, although in 1904 and 1905 it had skyrocketed to nearly 45 million pesos.[38]

It comes as no surprise that these variations in production corresponded to sharp changes in international relative prices. For example, the market price of silver declined sharply relative to lead, which saw a large increase in Mexican production, and mining diversified into other metals including zinc, antimony, and copper. Mexico left the silver standard for international transactions in 1905 (though silver continued to circulate domestically), which contributed to the eclipse of this one crucial industry. Silver would never again have the status it had when Díaz became president in 1876, when precious metals represented 75 percent of Mexican exports by value. By the time he decamped to exile in Paris, precious metals accounted for less than half of all exports.

The reason for this relative decline was the diversification of agricultural exports that had been slowly occurring since the 1870s. Coffee, cotton, sugar, sisal and vanilla were the principal crops, and some regions of the country such as Yucatán (henequen) and Durango and Tamaulipas (cotton) supplied new export crops.


Railroads and Infrastructure

None of this would have occurred without the massive changes in land tenure that had begun in the 1850s, but most of all, without the construction of railroads financed by the migration of foreign capital to Mexico under Díaz. At one level, it is a well-known story of social savings, which were substantial in Mexico because the terrain was difficult and the alternative modes of carriage few. One way or another, transportation has always been viewed as an “obstacle” to Mexican economic development. That must be true at some level, although recent studies (especially by Sandra Kuntz) have raised important qualifications. Railroads may not have been gateways to foreign dependency, as historians once argued, but there were limits to their ability to effect economic change, even internally. They tended to enlarge the internal market for some commodities more than others. The peculiarities of rate-making produced other distortions, while markets for some commodities were inevitably concentrated in major cities or transshipment points, which afforded some monopoly power to distributors even as a national market in basic commodities became more of a reality. Yet, in general, the changes were far-reaching.[39]

Conventional figures confirm conventional wisdom. When Díaz assumed the presidency, there were 660 km (410 miles) of track. In 1910, there were 19,280 km (about 12,000 miles). Seven major lines linked the cities of Mexico, Veracruz, Acapulco, Juárez, Laredo, Puebla, Oaxaca, Monterrey and Tampico in 1892. The lines were built by foreign capital (e.g., the Central Mexicano was built by the Atchison, Topeka and Santa Fe), which is why resolving the long-standing questions of foreign debt service was critical. Large government subsidies on the order of 3,500 to 8,000 pesos per km were granted, and financing the subsidies amounted to over 30 million pesos by 1890. While the railroads were successful in creating more of a national market, especially in the North, their finances were badly affected by the depreciation of the silver peso, given that foreign liabilities had to be liquidated in gold.
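
To put the expansion of the network in annual terms (a simple compound-growth calculation on the track figures above, taking the 34 years from 1876 to 1910):

$$\left(\frac{19{,}280}{660}\right)^{1/34} - 1 \approx 0.104,$$

or roughly 10 percent per year sustained over the entire Porfiriato.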

As a result, the government nationalized the railroads in 1903. At the same time, it undertook an enormous effort to construct infrastructure such as drainage and ports, virtually all of which was financed by British capital and managed by “Don Porfirio’s contractor,” Sir Weetman Pearson. Between railroads, ports, drainage works and irrigation facilities, the Mexican government borrowed 157 million pesos to finance construction costs.[40]

The expansion of the railroads, the build-out of infrastructure and the expansion of trade would normally have increased output per capita. But any data we have prior to 1930 are problematic, and before 1895, strictly speaking, we have no official measures of output per capita at all. Most scholars shy away from using levels of GDP in any form, other than for illustrative purposes. Aside from the usual problems attending national income accounting, Mexico presents a few exceptional challenges. In peasant families, where women were entrusted with converting maize into tortillas, no small job, the omission of their value added from GDP must constitute a sizeable defect in measured output. Moreover, as the commercial radius of Mexican agriculture expanded rapidly with the extension of railroads, roads, and later, highways, measured growth rates represented increased commercialization rather than increased production. We have no idea how important this phenomenon was, but it is worth keeping in mind when we look at the very rapid growth rates recorded after 1940.

There are various measures of cumulative growth during the Porfiriato. By and large, the figure from 1900 through 1910 is around 23 percent, which is certainly higher than the rates achieved during the nineteenth century, but nothing like what was recorded after 1940. In light of declining real wages, one can only assume that the bulk of “progress” flowed to the recipients of property income. This may well have represented a reversal of trends in the nineteenth century, when, some argue, property income contracted in the wake of the Insurgency.[41]
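
Cumulative growth of about 23 percent over a decade translates into a modest annual rate:

$$(1.23)^{1/10} - 1 \approx 0.021,$$

that is, roughly 2 percent per year.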

There was also significant industrialization in Mexico during the Porfiriato. Some industry, especially textiles, had its origins in the 1840s, but its size, scale and location altered dramatically by the end of the nineteenth century. For example, the cotton textile industry saw the number of workers, spindles and looms more than double from the late 1870s to the first decade of the twentieth century. Brewing and its associated industry, glassmaking, became well established in Monterrey during the 1890s. The country’s first iron and steel mill, Fundidora Monterrey, was established there as well in 1903. Other industries, such as papermaking and cigarettes, followed suit. By the end of the Porfiriato, over 10 percent of Mexico’s output was certainly industrial.[42]


From Revolution to “Miracle”

The Mexican Revolution (1910-1940) began as a political upheaval provoked by a crisis in the presidential succession when Porfirio Díaz refused to leave office in the wake of electoral defeat after signaling his willingness to do so in a famous public interview of 1908.[43] It was also the result of an agrarian uprising and the insistent demand of Mexico’s growing industrial proletariat for a share of political power. Finally, there was a small (fewer than 10 percent of all households) but upwardly mobile urban middle class created by economic development under Díaz whose access to political power had been effectively blocked by the regime’s mechanics of political control. Precisely how “revolutionary” the results of the armed revolt—which persisted largely through the 1910s and peaked in civil war in 1914-1915—really were has long been contentious, but the question is only tangentially relevant as a matter of economic history. The Mexican Revolution was no Bolshevik movement (of course, it predated the Bolshevik Revolution by seven years), but it was not a purely bourgeois constitutional movement either, although it did contain substantial elements of both.

From a macroeconomic standpoint, it has become fashionable to argue that the Revolution had few, if any, profound economic consequences. The principal reason, it seems, was that revolutionary factions were interested in appropriating rather than destroying the means of production. For example, the production of crude oil peaked in Mexico in 1915—at the height of the Revolution—because crude oil could be used as a source of income for whichever group controlled the wells in Veracruz state. This was a powerful consideration.[44]

Yet in another sense, the conclusion that the Revolution had slight economic effects is not only facile, but obviously wrong. As the demographic historian Robert McCaa showed, the excess mortality occasioned by the Revolution was larger than that of any similar event in Mexican history other than the conquest in the sixteenth century. No attempt has been made to measure the output lost to this demographic wastage (including births that never occurred), yet even the effect on the population cohort born between 1910 and 1920 is plain to see in later demographic studies.[45]

There is also a subtler question that some scholars have raised. The Revolution increased labor mobility and the labor supply by abolishing constraints on the rural population such as debt peonage and even outright slavery. Moreover, the Revolution, by encouraging and ultimately setting into motion a massive redistribution of previously privatized land, contributed to an enlarged supply of that factor of production as well. The true impact of these developments was realized in the 1940s and 1950s, when rapid economic growth began: the so-called Mexican Miracle, characterized by rates of real growth of as much as 6 percent per year (1955-1966). Establishing whatever connection exists between the Revolution and the Miracle will require serious examination on empirical grounds, not simply a dogmatic dismissal of what is now regarded as unfashionable development thinking: import substitution and inward-oriented growth.[46]

The other major consequence of the Revolution, the agrarian reform and the creation of the ejido (land granted by the Mexican state to the rural population under the authority provided by the revolutionary Constitution of 1917), took considerable time to coalesce, and was arguably not even high on the list of priorities of one of the Revolution’s principal instigators, Francisco Madero. The redistribution of land to the peasantry in the form of possession if not ownership, a kind of return to real or fictitious preconquest and colonial forms of land tenure, did peak during the avowedly reformist, and even modestly radical, presidency of Lázaro Cárdenas (1934-1940) after making only halting progress under his predecessors since the 1920s. From 1940 to 1965, the cultivated area in Mexico grew at 3.7 percent per year, and productivity in basic food crops rose at 2.8 percent per year.

Nevertheless, the long-run effects of the agrarian reform and land redistribution have been predictably controversial. Under the presidency of Carlos Salinas (1988-1994) the reform was officially declared over, with no further land redistribution to be undertaken and the legal status of the ejido definitively changed. The principal criticism of the ejido was that, in the long run, it encouraged inefficiently small landholding per farmer and, by virtue of its limitations on property rights, made agricultural credit difficult for peasants to obtain.[47]

There is no doubt these are justifiable criticisms, but they have to be placed in context. Cárdenas’ predecessors in office, Alvaro Obregón (1920-1924) and Plutarco Elías Calles (1924-1928), may well have preferred a more commercial model of agriculture with larger, irrigated holdings. But it is worth recalling that one of the original agrarian leaders of the Revolution, Emiliano Zapata, had an uneasy relationship from the start with Madero, who saw the Revolution in mostly political terms, and quickly rejected Madero’s leadership in favor of restoring peasant lands in his native state of Morelos. Cárdenas, who was in the midst of several major maneuvers that would require widespread popular support—such as the expropriation of foreign oil companies operating in Mexico in March 1938—was undoubtedly sensitive to the need to mobilize the peasantry on his behalf. The agrarian reform of his presidency, which surpassed that of any other, needs to be considered in those terms as well as in terms of economic efficiency.[48]

Cárdenas’ presidency also coincided with the continuation of the Great Depression. Like other countries in Latin America, Mexico was hard hit by the Depression, at least through the early 1930s. All sorts of consumer goods became scarcer, and the depreciation of the peso raised the relative price of imports. As had happened previously in Mexican history (in 1790-1810, during the Napoleonic Wars and the disruption of the Atlantic trade), in the medium term domestic industry was nevertheless given a stimulus, and import substitution, the subsequent core of Mexico’s industrialization program after World War II, received a decisive boost. On the other hand, Mexico also experienced the forced “repatriation” of people of Mexican descent, mostly from California, of whom 60 percent were United States citizens. The effects of this movement—the emigration of the Revolution in reverse—have never been properly analyzed. The general consensus is that World War II helped Mexico to prosper. Demand for labor and materials from the United States, to which Mexico was allied, raised real wages and incomes, and thus boosted aggregate demand. From 1939 through 1946, real output in Mexico grew by approximately 50 percent. Population growth accelerated as well as the country began to move into the later stages of the demographic transition, with a falling death rate while birth rates remained high.[49]


From Miracle to Meltdown: 1950-1982  

The history of import substitution manufacturing did not begin with postwar Mexico, but few countries (especially in Latin America) became as identified with the policy in the 1950s, and with what Mexicans termed the emergence of “stabilizing development.” There was never anything resembling a formal policy announcement, although Raúl Prebisch’s 1949 manifesto, “The Economic Development of Latin America and its Principal Problems,” might be regarded as supplying one. Prebisch’s argument, that a directed change in the composition of imports toward capital goods would facilitate domestic industrialization, was, in essence, the basis of the policy that Mexico followed. Mexico stabilized the nominal exchange rate at 12.5 pesos to the dollar in 1954, and further movements in the real exchange rate (until the 1970s) were unimportant. The substantive bias of import substitution in Mexico was a high effective rate of protection to both capital and consumer goods. Jaime Ros has calculated that these rates ranged between 47 and 85 percent in 1960, and between 33 and 109 percent in 1980. The result, in the short to intermediate run, was very rapid economic growth, averaging 6.5 percent per year from 1950 through 1973. Other than Brazil, which also followed an import substitution regime, no country in Latin America experienced higher rates of growth. Mexico’s was substantially above the regional average.[50]
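
Since the effective rate of protection carries much of the argument here, a hedged illustration of the standard (Corden) formula may help. The tariff rates and input share below are hypothetical, chosen only to show how protection of value added can exceed the nominal tariff; they are not Ros’s actual data:

```python
# Effective rate of protection (ERP), standard one-input version:
#   ERP = (t_output - a * t_input) / (1 - a),
# where a is the share of traded inputs in the product's value at
# world prices. All numbers below are hypothetical.
def effective_rate_of_protection(t_output, t_input, input_share):
    return (t_output - input_share * t_input) / (1 - input_share)

# A 20% tariff on the finished good and 10% on inputs, with inputs
# making up 60% of value at world prices, protects domestic value
# added at an effective rate of 35%.
print(f"{effective_rate_of_protection(0.20, 0.10, 0.60):.0%}")
```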

[Figure: historical graph of population growth in Mexico through 2000. Source: Estadísticas Históricas de México (various editions since 1999; the most recent is 2014) (Accessed July 20, 2016).]


But there were unexpected results as well. The contribution of labor to GDP growth was 14 percent. Capital’s contribution was 53 percent, and the remainder, total factor productivity (TFP), about 28 percent.[51] As a consequence, while Mexico’s growth occurred through the accumulation of capital, the distribution of income became extremely skewed. The ratio of the income of the top 10 percent of households to that of the bottom 40 percent was 7 in 1960, and 6 in 1968. Even supporters of Mexico’s development program, such as Carlos Tello, conceded that it was probably only the organized peasants and workers who experienced an effective improvement of their relative position. The fruits of the Revolution were unevenly distributed, even among the working class.[52]
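
The decomposition behind those percentages is standard growth accounting. A minimal sketch follows; the factor income shares and growth rates are hypothetical placeholders (Hoffman’s underlying data are not reproduced in the text), chosen only to show how the contributions are computed:

```python
# Solow growth accounting: g_Y = s_K * g_K + s_L * g_L + g_TFP,
# where s_K and s_L are factor income shares and g_TFP is the residual.
# All numbers below are hypothetical, for illustration only.
s_K, s_L = 0.45, 0.55                 # factor income shares
g_K, g_L, g_Y = 0.08, 0.025, 0.065    # annual growth of capital, labor, GDP

g_TFP = g_Y - (s_K * g_K + s_L * g_L)  # the Solow residual
for name, part in [("capital", s_K * g_K),
                   ("labor", s_L * g_L),
                   ("TFP", g_TFP)]:
    print(f"{name:>8}: {part / g_Y:.0%} of GDP growth")
```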

By “organized” one means such groups as the most important labor union in the country, the CTM (Confederation of Mexican Workers), or the nationally recognized peasant union, the CNC, both of which formed two of the three organized sectors of the official government party, the PRI, or Party of the Institutional Revolution, organized in 1946. The CTM in particular was instrumental in supporting the official policy of import substitution, and thus benefited from government wage setting and political support. The leaders of these organizations became important political figures in their own right. One, Fidel Velázquez, served as both a federal senator and head of the CTM from 1941 until his death in 1997. The incorporation of these labor and peasant groups into the political system offered the government both a means of control and a guarantee of electoral support. They became pillars of what the Peruvian writer Mario Vargas Llosa famously called “the perfect dictatorship” of the PRI from 1946 to 2000, during which the PRI held a monopoly of the presidency and the important offices of state. In a sense, import substitution was the economic ideology of the PRI.[53]

Labor’s role in economic development during the years of rapid growth is, like many other subjects here, debated. While some have found strong wage growth, others, looking mostly at Mexico City, have found declining real wages. Beyond that, there is the question of informality and a segmented labor market. Were workers in the CTM the real beneficiaries of economic growth, while others in the informal sector (defined as receiving no social security payments, meaning roughly two-thirds of Mexican workers) did far less well? The attraction of a segmented labor market model is that it can address an obvious puzzle: why would industry substitute capital for labor, as it clearly did, if real wages were not rising? Postulating an informal sector that absorbed the rapid influx of rural migrants and thus held nominal wages steady, while organized labor in the CTM won higher negotiated wages at the cost of limiting its own employment, is an attractive hypothesis, but it would not command universal agreement. Nothing has been resolved, at least for the period of the “Miracle.” After Mexico entered a prolonged series of economic crises in the 1980s—here labelled as “meltdown”—the discussion must change, because many hold that relative political stability and the failure of open unemployment to rise sharply are explained by falling real wages.

The fiscal basis on which the years of the Miracle were constructed was conventional, not to say conservative.[54] A stable nominal exchange rate, balanced budgets, limited public borrowing, and a predictable monetary policy were all predicated on the notion that the private sector would react positively to favorable incentives. By and large, it did. Until the late 1960s, foreign borrowing was considered inconsequential, even if there was some concern that it was starting to rise. No one foresaw serious macroeconomic instability. It is worth consulting a brief memorandum from Secretary of State Dean Rusk to President Lyndon Johnson (Washington, December 11, 1968) to get some insight into how informed contemporaries viewed Mexico. The instability that existed was seen as a consequence of heavy-handedness on the part of the PRI and overreaction by the security forces. Informed observers did not view Mexico’s embrace of import-substitution industrialization as a train wreck waiting to happen. Historical actors are rarely so prescient.[55]


Slowing of the Miracle and Echeverría

The most obvious problems in Mexico were political. They stemmed from the increasing awareness that the limits of the “institutional revolution” had been reached, particularly regarding the growing democratic demands of the urban middle classes. The economic problem, which was far from obvious, was that import substitution had concentrated income in the upper 10 percent of the population, so that domestic demand had begun to stagnate. Initially at least, public sector borrowing could support a variety of consumption subsidies to the population, and there were also efforts to transfer resources out of agriculture via domestic prices for staples such as maize. Yet Mexico’s population was also growing at nearly 3 percent per year, so the long-term prospects for any of these measures were cloudy.

At the same time, growing political pressures on the PRI, most dramatically manifest in the army’s violent repression of student demonstrators at Tlatelolco in 1968 just prior to the Olympics, had convinced some elements in the PRI, people like Carlos Madrazo, to argue for more radical change. The emergence of an incipient guerrilla movement in the state of Guerrero had much the same effect. The new president, Luis Echeverría (1970-76), openly pushed for changes in the distribution of income and wealth, incited agrarian discontent for political purposes, dramatically increased government spending and borrowing, and alienated what had typically been a complaisant, if not especially friendly, private sector.

The country’s macroeconomic performance began to deteriorate dramatically. Inflation, normally in the range of about 5 percent, rose into the low 20 percent range in the early 1970s. The public sector deficit, fueled by increasing social spending, rose from 2 to 7 percent of GDP. Money supply growth now averaged about 14 percent per year. Real GDP growth had begun to slip after 1968, and in the early 1970s it deteriorated further, if unevenly. There had been clear convergence of regional economies in Mexico between 1930 and 1980 because of changing patterns of industrialization in the northern and central regions of the country. After 1980, that process stalled and regional inequality again widened.[56]

While there is a tendency to blame Luis Echeverría for all or most of these developments, this forgets that his administration coincided with the first OPEC oil shock (1973) and rapidly deteriorating external conditions. Mexico had not yet discovered the oil reserves (1978) that were to provide a temporary respite from economic adjustment after the shock of the peso devaluation of 1976—the first change in its value in over 20 years. At the same time, external demand fell, principally transmitted from the United States, Mexico’s largest trading partner, where the economy had fallen into recession in late 1973. Yet it seems reasonable to conclude that the difficult international environment, while important in bringing Mexico’s “miracle” period to a close, was compounded by Echeverría’s propensity for demagoguery and by the loss of the fiscal discipline that had long characterized government policy, at least since the 1950s. The only question to be resolved was what sort of conclusion the period would come to. The answer, unfortunately, was disastrous.[57]


Meltdown: The Debt Crisis, the Lost Decade and After

In contemporary parlance, Mexico had passed from “stabilizing” to “shared” development under Echeverría. But the devaluation of 1976 from 12.5 to 20.5 pesos to the dollar suggested that something had gone awry. One might suppose that some adjustment in course, especially in public spending and borrowing, would have followed. Precisely the opposite occurred. Between 1976 and 1979, nominal federal spending doubled. The budget deficit increased by a factor of 15. The reason for this odd performance was the discovery of crude oil in the Gulf of Mexico, perhaps unsurprising in light of the spiking prices of the 1970s (the oil shocks of 1973-74 and 1978-79), but nevertheless of considerable magnitude. In 1975, Mexico’s proven reserves were 6 billion barrels of oil. By 1978, they had increased to 40 billion. President López Portillo set himself to the task of “administering abundance,” and Mexican analysts confidently predicted crude oil at $100 a barrel (when it stood at $37 in current prices in 1980). The scope of the miscalculation was catastrophic. At the same time, encouraged by bank loan pushing and effectively negative real rates of interest, Mexico borrowed abroad. Consumption subsidies, while vital in the face of slowing import substitution, were costly and, when supported by foreign borrowing, unsustainable; foreign indebtedness doubled between 1976 and 1979, and rose even further thereafter.

Matters came to a head in 1982. By then, Mexico’s foreign indebtedness was estimated at over $80 billion, an increase from less than $20 billion in 1975. Real interest rates had begun to rise in the United States in mid-1981, and with Mexican borrowing tied to international rates, debt service rapidly increased. Oil revenue, which had come to constitute the great bulk of foreign exchange, followed international crude prices downward, driven in large part by a recession that had begun in the United States in mid-1981. Within six months, Mexico, too, had fallen into recession. Real per capita output declined by 8 percent in 1982. The government was forced to devalue sharply; the real exchange rate fell by 50 percent in 1982 and inflation approached 100 percent. By late summer, Finance Minister Jesús Silva Herzog admitted that the country could not meet an upcoming payment obligation, and was forced to turn to the US Federal Reserve, the IMF, and a committee of bank creditors for assistance. In late August, in a remarkable display of intemperance, President López Portillo nationalized the banking system. By December 20, 1982, Mexico’s incoming president, Miguel de la Madrid (1982-88), appeared, beleaguered, on the cover of Time magazine framed by the caption, “We are in an Emergency.” It was, as the saying goes, a perfect storm, and with it the Debt Crisis and the “Lost Decade” in Mexico had begun. It would be years before anything resembling stability, let alone prosperity, was restored. Even then, what growth there was proved a pale imitation of what had occurred during the decades of the “Miracle.”
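
A back-of-the-envelope sketch shows why floating-rate debt was so dangerous: a rise in international rates reprices the entire stock of debt, not just new loans. The $80 billion figure is from the text; the interest rates are hypothetical, chosen only to illustrate the mechanism:

```python
# Annual interest due on a floating-rate debt stock at several
# hypothetical rates. Only the debt stock comes from the text.
debt = 80e9   # approximate 1982 external debt, in dollars

for rate in (0.08, 0.12, 0.16):
    print(f"at {rate:.0%}: interest of about ${debt * rate / 1e9:.0f} billion per year")
```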


The 1980s

The 1980s were a difficult decade.[58] After 1981, annual real per capita growth would not reach 4 percent again until 1989, and in 1986 it fell by 6 percent. In 1987, inflation reached 159 percent. The peso’s nominal exchange rate against the dollar depreciated by 139 percent (measured in pesos per dollar) in 1986-1987. By the standards of the years of stabilizing development, the record of the 1980s was disastrous. To complete the devastation, on September 19, 1985, the worst earthquake in Mexican history, 7.8 on the Richter scale, devastated large parts of central Mexico City and killed 5,000 people (some estimates run as high as 25,000), many of whom were simply buried in mass graves. It was as if a plague of biblical proportions had struck the country.

Massive indebtedness produced a dramatic decline in the standard of living as structural adjustment occurred. Servicing the debt required the production of an export surplus in non-oil exports, which in turn required a reduction in domestic consumption. In an effort to surmount the crisis, the government implemented an agreement between organized labor, the private sector, and agricultural producers called the Economic Solidarity Pact (PSE). The PSE combined an incomes policy with fiscal austerity, trade and financial liberalization, generally tight monetary policy, and debt renegotiation and reduction.

The centerpiece of the “remaking” of the previously inward orientation of the domestic economy was the North American Free Trade Agreement (NAFTA, 1993) linking Mexico, the United States, and Canada. While average tariff rates in Mexico had fallen from 34 percent in 1985 to 4 percent in 1992—even before NAFTA was signed—the agreement was generally seen as creating the institutional and legal framework whereby the reforms of Miguel de la Madrid and Carlos Salinas (1988-1994) would be preserved. Most economists thought its effects would be relatively larger in Mexico than in the United States, which generally appears to have been the case. Nevertheless, NAFTA has been predictably controversial, as trade agreements are wont to be. The political furor (and, in some places, euphoria) surrounding the agreement has faded, but never entirely disappeared. In the United States in particular, NAFTA is blamed for deindustrialization, although pressure on manufacturing, like trade liberalization itself, was underway long before NAFTA was negotiated. In Mexico, there has been much hand-wringing over the fate of agriculture, and of small maize producers in particular. While none of this is likely to cease, it is nevertheless the case that there has been a large increase in the volume of trade between the NAFTA partners. To dismiss this is, quite plainly, misguided, even where sensitive and well-organized political constituencies are concerned. But the legacy of NAFTA, like most everything in Mexican economic history, remains unsettled.


Post Crisis: No Miracles

Still, while some prosperity was restored to Mexico by the reforms of the 1980s and 1990s, the general macroeconomic results have been disappointing, not to say mediocre. Average real compensation per person in manufacturing in 2008 was virtually unchanged from 1993, according to the Instituto Nacional de Estadística, Geografía e Informática, and there is little reason to think that compensation has improved at all since then. It is generally conceded that per capita GDP growth has probably averaged not much more than 1 percent a year. Real GDP growth since NAFTA, according to the OECD, has rarely reached 5 percent, and since 2010 it has been well below that.



[Figure: real GDP growth in Mexico; the vertical scale cuts the horizontal axis at 1982. Source: (Accessed July 21, 2016).]


For virtually everyone in Mexico, the question is why, and the answers proposed include virtually every plausible factor: the breakdown of the political system after the PRI’s historic loss of presidential power in 2000; the rise of China as a competitor to Mexico in international markets; the explosive spread of narcoviolence in recent years, albeit concentrated in the states of Sonora, Sinaloa, Tamaulipas, Nuevo León and Veracruz; the results of NAFTA itself; the failure of the political system to undertake further structural economic reforms and privatizations after the initial changes of the 1980s, especially regarding the national oil monopoly, Petróleos Mexicanos (PEMEX); and the failure of the border industrialization program (maquiladoras) to develop substantive backward linkages to the rest of the economy. This is by no means an exhaustive list of candidate explanations for poor economic performance. The choice of a cause tends to reflect the ideology of the critic.[59]

Yet it seems that, at the end of the day, the reason why post-NAFTA Mexico has failed to grow comes down to something much more fundamental: a fear of growing, embedded in the belief that the collapse of the 1980s and early 1990s (including the devastating “Tequila Crisis” of 1994-1995, which resulted in another enormous devaluation of the peso after an initial attempt to contain the crisis was bungled) was so traumatic and costly as to render even modest efforts to promote growth, let alone the dirigisme of times past, essentially unwarranted. The central bank, the Banco de México (Banxico), rules out the promotion of economic growth as part of its remit—even as a theoretical proposition, let alone as a goal of macroeconomic policy—and concerns itself only with price stability. The language of its formulation is striking. “During the 1970s, there was a debate as to whether it was possible to stimulate economic growth via monetary policy. As a result, some governments and central banks tried to reduce unemployment through expansive monetary policy. Both economic theory and the experience of economies that tried this prescription demonstrated that it lacked validity. Thus, it became clear that monetary policy could not actively and directly stimulate economic activity and employment. For that reason, modern central banks have as their primary goal the promotion of price stability” (translation mine). Banxico is not the Fed: there is no dual mandate in Mexico.[60]

The Mexican banking system has scarcely made things easier. Private credit stands at only about a third of GDP. In recent years, the increase in private sector savings has been largely channeled to government bonds, but until quite recently public sector deficits were very small, which is to say, fiscal policy has not been expansionary. If monetary and fiscal policy are both relatively tight, if private credit is not easy to come by, and if growth is typically presumed to be an inevitable concomitant of economic stability for which no actor (other than the private sector) is deemed responsible, it should come as no surprise that economic growth over the past two decades has been lackluster. In the long run, aggregate supply determines real GDP, but in the short run, nominal demand matters: there is no point in creating productive capacity to satisfy demand that does not exist. And, unlike during the period of the Miracle and Stabilizing Development, attention to demand since 1982 has been limited, not to say off the table completely. It may be understandable, but Mexico’s fiscal and monetary authorities seem to suffer from what could be termed “Fear of Growth.” For better or worse, the results are now on display. After the current (2016) return to a relatively austere budget, it remains to be seen how the economic and political system in contemporary Mexico handles slow economic growth. For that would now seem to be, in a basic sense, its largest challenge for the future.

[1] I am grateful to Ivan Escamilla and Robert Whaples for their careful readings and thoughtful criticisms.

[2] The standard reference work is Sandra Kuntz Ficker (ed.), Historia económica general de México. De la Colonia a nuestros días (México, DF: El Colegio de México, 2010).

[3] Oscar Martinez, Troublesome Border (rev. ed., University of Arizona Press: Tucson, AZ, 2006) is the most helpful general account in English.

[4] There are literally dozens of general accounts of the pre-conquest world. A good starting point is Richard E.W. Adams, Prehistoric Mesoamerica (3d ed., University of Oklahoma Press: Norman, OK, 2005). More advanced is Richard E.W. Adams and Murdo J. Macleod, The Cambridge History of the Mesoamerican Peoples: Mesoamerica. (2 parts, New York: Cambridge University Press, 2000).

[5] Nora C. England and Roberto Zavala Maldonado, “Mesoamerican Languages,” Oxford Bibliographies (Accessed July 10, 2016).

[6] For an introduction to the nearly endless controversy over the pre- and post-contact population of the Americas, see William M. Denevan (ed.), The Native Population of the Americas in 1492 (2d rev ed., Madison: University of Wisconsin Press, 1992).

[7] Sherburne F. Cook and Woodrow Borah, Essays in Population History: Mexico and California (Berkeley, CA: University of California Press, 1979), p. 159.

[8] Gene C. Wilken, Good Farmers: Traditional Agricultural Resource Management in Mexico and Central America (Berkeley: University of California Press, 1987), p. 24.

[9] Bernard Ortiz de Montellano, Aztec Medicine, Health, and Nutrition (New Brunswick, NJ: Rutgers University Press, 1990).

[10] Bernardo García Martínez, “Encomenderos españoles y British residents: El sistema de dominio indirecto desde la perspectiva novohispana”, in Historia Mexicana, LX: 4 [140] (abr-jun 2011), pp. 1915-1978.

[11] These epidemics are extensively and exceedingly well documented. One of the most recent examinations is Rodolfo Acuna-Soto, David W. Stahle, Matthew D. Therrell, Richard D. Griffin, and Malcolm K. Cleaveland, “When Half of the Population Died: The Epidemic of Hemorrhagic Fevers of 1576 in Mexico,” FEMS Microbiology Letters 240 (2004), pp. 1–5 (Accessed July 10, 2016). See in particular the exceptional map and table on pp. 2-3.

[12] See in particular Bernardo García Martínez, Los pueblos de la Sierra: el poder y el espacio entre los indios del norte de Puebla hasta 1700 (México, DF: El Colegio de México, 1987), and Elinor G.K. Melville, A Plague of Sheep: Environmental Consequences of the Conquest of Mexico (New York: Cambridge University Press, 1997).

[13] J. H. Elliott, “A Europe of Composite Monarchies,” Past & Present 137 (The Cultural and Political Construction of Europe): 48–71; Guadalupe Jiménez Codinach, “De Alta Lealtad: Ignacio Allende y los sucesos de 1808-1811,” in Marta Terán and José Antonio Serrano Ortega, eds., Las guerras de independencia en la América Española (La Piedad, Michoacán, MX: El Colegio de Michoacán, 2002), p. 68.

[14] Richard Salvucci, “Capitalism and Dependency in Latin America,” in Larry Neal and Jeffrey G. Williamson, eds., The Cambridge History of Capitalism (2 vols., New York: Cambridge University Press, 2014), 1: pp. 403-408.

[15] Source: TePaske Page (Accessed July 19, 2016).

[16] Edith Boorstein Couturier, The Silver King: The Remarkable Life of the Count of Regla in Colonial Mexico (Albuquerque, NM: University of New Mexico Press, 2003); Dana Velasco Murillo, Urban Indians in a Silver City: Zacatecas, Mexico, 1546-1810 (Stanford, CA: Stanford University Press, 2015), p. 43. The standard work on the subject is David Brading, Miners and Merchants in Bourbon Mexico, 1763-1810 (New York: Cambridge University Press, 1971). But also see Robert Haskett, “Our Suffering with the Taxco Tribute: Involuntary Mine Labor and Indigenous Society in Central New Spain,” Hispanic American Historical Review 71:3 (1991), pp. 447-475. For silver in China see (accessed July 13, 2016). For the rents of empire question, see Michael Costeloe, Response to Revolution: Imperial Spain and the Spanish American Revolutions, 1810-1840 (New York: Cambridge University Press, 1986).

[17] This is an estimate. David Ringrose concluded that in the 1780s, the colonies accounted for 45 percent of Crown income, and one would suppose that Mexico would account for at least about half of that. See David R. Ringrose, Spain, Europe and the ‘Spanish Miracle’, 1700-1900 (New York: Cambridge University Press, 1996), p. 93; Mauricio Drelichman, “The Curse of Moctezuma: American Silver and the Dutch Disease,” Explorations in Economic History 42:3 (2005), pp. 349-380.

[18] José Antonio Escudero, El supuesto memorial del Conde de Aranda sobre la Independencia de América (México, DF: Universidad Nacional Autónoma de México, 2014) (accessed July 13, 2016).

[19] Allan J. Kuethe and Kenneth J. Andrien, The Spanish Atlantic World in the Eighteenth Century. War and the Bourbon Reforms, 1713-1796 (New York: Cambridge University Press, 2014) is the most recent account of this period.

[20] Richard J. Salvucci, “Economic Growth and Change in Bourbon Mexico: A Review Essay,” The Americas 51:2 (1994), pp. 219-231; William B. Taylor, Magistrates of the Sacred: Priests and Parishioners in Eighteenth-Century Mexico (Stanford, CA: Stanford University Press, 1996), p. 24; Luis Jáuregui, La Real Hacienda de Nueva España. Su Administración en la Época de los Intendentes, 1786-1821 (México, DF: UNAM, 1999), p. 157.

[21] Jeremy Baskes, Staying Afloat: Risk and Uncertainty in Spanish Atlantic World Trade, 1760-1820 (Stanford, CA: Stanford University Press, 2013); Xabier Lamikiz, Trade and Trust in the Eighteenth-Century Atlantic World: Spanish Merchants and Their Overseas Networks (Suffolk, UK: The Boydell Press, 2013). The starting point of all these studies is Clarence Haring, Trade and Navigation between Spain and the Indies in the Time of the Hapsburgs (Cambridge, MA: Harvard University Press, 1918).

[22] The best, and indeed virtually unique, starting point for considering these changes in their broadest dimensions is the joint work of Stanley and Barbara Stein: Silver, Trade, and War (2003); Apogee of Empire (2004); and Edge of Crisis (2010). All were published by Johns Hopkins University Press and do for the Spanish Empire what Laurence Henry Gipson did for the First British Empire.

[23] The key work is María Eugenia Romero Sotelo, Minería y Guerra. La economía de Nueva España, 1810-1821 (México, DF: UNAM, 1997).

[24] Calculated from José María Luis Mora, Crédito Público ([1837] México, DF: Miguel Angel Porrúa, 1986), pp. 413-460. Also see Richard J. Salvucci, Politics, Markets, and Mexico’s “London Debt,” 1823-1887 (NY: Cambridge University Press, 2009).

[25] Jesús Hernández Jaimes, La Formación de la Hacienda Pública Mexicana y las Tensiones Centro -Periferia, 1821-1835  (México, DF: El Colegio de México, 2013). Javier Torres Medina, Centralismo y Reorganización. La Hacienda Pública Durante la Primera República Central de México, 1835-1842 (México, DF: Instituto Mora, 2013). The only treatment in English is Michael P. Costeloe, The Central Republic in Mexico, 1835-1846 (New York: Cambridge University Press, 1993).

[26] An agricultural worker who worked full time, six days a week, for the entire year (a strong assumption) in Central Mexico could have expected a cash income of perhaps 24 pesos. If food, such as beans and tortillas, were added, the whole pay might reach 30. The figure of 40 pesos comes from considerably richer agricultural lands around the city of Querétaro, and includes average income from nonagricultural employment as well, which was higher. Measuring Worth would put the relative historic standard of living value in 2010 prices at $1,040, with the caveat that this is relative to a bundle of goods purchased in the United States.

[27] The phrase comes from Guido di Tella and Manuel Zymelman. See Colin Lewis, “Explaining Economic Decline: A Review of Recent Debates in the Economic and Social History Literature on the Argentine,” European Review of Latin American and Caribbean Studies 64 (1998), pp. 49-68.

[28] Francisco Téllez Guerrero, De reales y granos. Las finanzas y el abasto de la Puebla de los Angeles, 1820-1840 (Puebla, MX: CIHS, 1986), pp. 47-79.

[29]This is based on an analysis of government lending contracts. See Rosa María Meyer and Richard Salvucci, “The Panic of 1837 in Mexico: Evidence from Government Contracts” (in progress).

[30] There is an interesting summary of these data in U.S. Govt., 57th Cong., 1st sess., House, Monthly Summary of Commerce and Finance of the United States (September 1901) (Washington, DC: GPO, 1901), pp. 984-986.

[31] Salvucci, Politics and Markets, pp. 201-221.

[32] Miguel Galindo y Galindo, La Gran Década Nacional o Relación Histórica de la Guerra de Reforma, Intervención Extranjera, y gobierno del archiduque Maximiliano, 1857-1867 ([1902], 3 vols., México, DF: Fondo de Cultura Económica, 1987).

[33] Carmen Vázquez Mantecón, Santa Anna y la encrucijada del Estado. La dictadura, 1853-1855 (México, DF: Fondo de Cultura Económica, 1986).

[34] Moramay López-Alonso, Measuring Up: A History of Living Standards in Mexico, 1850-1950 (Stanford, CA: Stanford University Press, 2012); Amílcar Challú and Aurora Gómez Galvarriato, “Mexico’s Real Wages in the Age of the Great Divergence, 1730-1930,” Revista de Historia Económica 33:1 (2015), pp. 123-152; Amílcar E. Challú, “The Great Decline: Biological Well-Being and Living Standards in Mexico, 1730-1840,” in Ricardo Salvatore, John H. Coatsworth, and Amílcar E. Challú, eds., Living Standards in Latin American History: Height, Welfare, and Development, 1750-2000 (Cambridge, MA: Harvard University Press, 2010), pp. 23-67.

[35] See Challú and Gómez Galvarriato, “Real Wages,” Figure 5, p. 101.

[36] Luis González et al, La economía mexicana durante la época de Juárez (México, DF: 1976).

[37] Teresa Rojas Rabiela and Ignacio Gutiérrez Ruvalcaba, Cien ventanas a los países de antaño: fotografías del campo mexicano de hace un siglo (México, DF: CONACYT, 2013), pp. 18-65.

[38] Alma Parra, “La Plata en la Estructura Económica Mexicana al Inicio del Siglo XX,” El Mercado de Valores 49:11 (1999), p. 14.

[39] Sandra Kuntz Ficker, Empresa Extranjera y Mercado Interno: El Ferrocarril Central Mexicano (1880-1907) (México, DF: El Colegio de México, 1995).

[40] Priscilla Connolly, El Contratista de Don Porfirio. Obras públicas, deuda y desarrollo desigual (México, DF: Fondo de Cultura Económica, 1997).

[41] Most notably John Tutino, From Insurrection to Revolution in Mexico: Social Bases of Agrarian Violence, 1750-1940 (Princeton, NJ: Princeton University Press, 1986), p. 229. My growth figures are based on INEGI, Estadísticas Históricas de México (2014) (Accessed July 15, 2016).

[42] Stephen H. Haber, Industry and Underdevelopment: The Industrialization of Mexico, 1890-1940 (Stanford, CA: Stanford University Press, 1989); Aurora Gómez-Galvarriato, Industry and Revolution: Social and Economic Change in the Orizaba Valley (Cambridge, MA: Harvard University Press, 2013).

[43] There are literally dozens of accounts of the Revolution. The usual starting point, in English, is Alan Knight, The Mexican Revolution (reprint ed., 2 vols., Lincoln, NE: 1990).

[44] This argument has been made most insistently in Armando Razo and Stephen Haber, “The Rate of Growth of Productivity in Mexico, 1850-1933: Evidence from the Cotton Textile Industry,” Journal of Latin American Studies 30:3 (1998), pp. 481-517.

[45] Robert McCaa, “Missing Millions: The Demographic Cost of the Mexican Revolution,” Mexican Studies/Estudios Mexicanos 19:2 (Summer 2003), pp. 367-400; Virgilio Partida-Bush, “Demographic Transition, Demographic Bonus, and Ageing in Mexico,” Proceedings of the United Nations Expert Group Meeting on Social and Economic Implications of Changing Population Age Structures (Accessed July 15, 2016), pp. 287-290.

[46] An implication of the studies of Alan Knight, and of Clark Reynolds, The Mexican Economy: Twentieth Century Structure and Growth (New Haven, CT: Yale University Press, 1971).

[47] An interesting summary of revisionist thinking on the nature and history of the ejido appears in Emilio Kourí, “La invención del ejido,” Nexos, January 2015.

[48] Alan Knight, “Cardenismo: Juggernaut or Jalopy?” Journal of Latin American Studies 26:1 (1994), pp. 73-107.

[49] Stephen Haber, “The Political Economy of Industrialization,” in Victor Bulmer-Thomas, John Coatsworth, and Roberto Cortes-Conde, eds., The Cambridge Economic History of Latin America (2 vols., New York: Cambridge University Press, 2006), 2:  537-584.

[50] Again, there are dozens of studies of the Mexican economy in this period. Ros’s figures come from “Mexico’s Trade and Industrialization Experience Since 1960: A Reconsideration of Past Policies and Assessment of Current Reforms,” Kellogg Institute (Working Paper 186, January 1993). For a more general study, see Juan Carlos Moreno-Brid and Jaime Ros, Development and Growth in the Mexican Economy: A Historical Perspective (New York: Oxford University Press, 2009). A recent Spanish-language treatment is Enrique Cárdenas Sánchez, El largo curso de la economía mexicana. De 1780 a nuestros días (México, DF: Fondo de Cultura Económica, 2015). A view from a different perspective is Carlos Tello, Estado y desarrollo económico. México 1920-2006 (México, DF: UNAM, 2007).

[51] André A. Hoffman, Long Run Economic Development in Latin America in a Comparative Perspective: Proximate and Ultimate Causes (Santiago, Chile: CEPAL, 2001), p. 19.

[52] Tello, Estado y desarrollo, pp. 501-505.

[53] Mario Vargas Llosa, “Mexico: The Perfect Dictatorship,” New Perspectives Quarterly 8 (1991), pp. 23-24.

[54] Rafael Izquierdo, Política Hacendaria del Desarrollo Estabilizador, 1958-1970 (México, DF: Fondo de Cultura Económica, 1995). The term “stabilizing development” was itself coined by Izquierdo while serving as a government minister.

[55] See Foreign Relations of the United States, 1964-1968. Mexico and Central America (Accessed July 15, 2016).

[56] José Aguilar Retureta, “The GDP Per Capita of the Mexican Regions (1895-1930): New Estimates,” Revista de Historia Económica 33:3 (2015), pp. 387-423.

[57] For a contemporary account with a sense of the immediacy of the end of the Echeverría regime, see “Así se devaluó el peso,” Proceso, November 13, 1976.

[58] The standard account is Stephen Haber, Herbert Klein, Noel Maurer, and Kevin Middlebrook, Mexico since 1980 (New York: Cambridge University Press, 2008). A particularly astute economic account is Nora Lustig, Mexico: The Remaking of an Economy (2d ed., Washington, DC: The Brookings Institution, 1998). But see also Louise E. Walker, Waking from the Dream: Mexico’s Middle Classes after 1968 (Stanford, CA: Stanford University Press, 2013).

[59] See, for example, Jaime Ros Bosch, Algunas tesis equivocadas sobre el estancamiento económico de México (México, DF: El Colegio de México, 2013).

[60] La Banca Central y la Importancia de la Estabilidad Económica, June 16, 2008 (Accessed July 15, 2016). Also see Brian Winter, “This Man is Brilliant: So Why Doesn’t Mexico’s Economy Grow Faster?” Americas Quarterly (Accessed July 21, 2016).



The Sterling Area

Jerry Mushin, Victoria University of Wellington


One of the consequences of the economic crisis of 1929–33 was that a large number of countries abandoned the gold standard. This meant that their governments no longer guaranteed, in gold terms, their currencies’ values. The United Kingdom (and the Irish Free State, whose currency had a rigidly fixed exchange rate with the British pound) left the gold standard in 1931. To reduce the fluctuation of exchange rates, many of the countries that left the gold standard decided to stabilize their currencies with respect to the value of the British pound (which is also known as sterling). These countries became known, initially unofficially, as the Sterling Area (and also as the Sterling Bloc). Sterling Area countries tended (as they had before the end of the gold standard) to hold their reserves in the form of sterling balances in London.

The countries that formed the Sterling Area generally had at least one of two characteristics. The UK had strong historical links with these countries and/or was a major market for their exports. Membership of the Sterling Area was not constant. By 1933, it comprised most of the British Empire, and Denmark, Egypt, Estonia, Finland, Iran, Iraq, Latvia, Lithuania, Norway, Portugal, Siam (Thailand), Sweden, and other countries. Despite being parts of the British Empire, Canada, Hong Kong, and Newfoundland did not join the Sterling Area. However, Hong Kong joined the Sterling Area after the Second World War. Other countries, including Argentina, Brazil, Bolivia, Greece, Japan, and Yugoslavia, stabilized their exchange rates with respect to the British pound for several years and (especially Argentina and Japan) often held significant reserves in sterling but, partly because they enforced exchange control, were not regarded as part of the Sterling Area.

Following the 1931 crisis, the UK introduced restrictions on overseas lending. This provided an additional incentive for Sterling Area membership. Countries that pegged their currencies to the British pound, and held their official external reserves largely in sterling assets, had preferential access to the British capital market. The British pound was perceived to have a relatively stable value and to be widely acceptable.

Membership of the Sterling Area also involved an effective pooling of non-sterling (especially U.S. dollar) reserves, which were frequently a scarce resource. This was of mutual benefit; the surpluses of some countries financed the deficits of others. The UK could perhaps be regarded as the banker for the other members of the Sterling Area.

Following the gold standard crisis in the early 1930s, the Sterling Area was one of three major currency groups. The gold bloc, comprising Belgium, France, Italy, Luxembourg, Netherlands, Switzerland, and Poland (and the colonial territories of four of these), consisted of those countries that, in 1933, expressed a formal determination to continue to operate the gold standard. However, this bloc began to collapse from 1935. The third group of countries was known as the exchange-control countries. The members of this bloc, comprising Austria, Bulgaria, Czechoslovakia, Germany, Greece, Hungary, Turkey, and Yugoslavia, regulated the currency market and imposed tariffs and import restrictions. Germany was the dominant member of this bloc.


In September 1939, at the start of the Second World War, the British government introduced exchange controls. However, there were no restrictions on payments between Sterling Area countries. The value of the pound was fixed at US$4.03, which was a devaluation of about 14%. Partly as a result of these measures, most of the Sterling Area countries without a British connection withdrew. Egypt, the Faroe Islands, Iceland, and Iraq remained members of the Sterling Area, and the Free French (non-Vichy) territories became members.


There were three main changes in the Sterling Area after the Second World War. First, its membership was precisely defined, as the Scheduled Territories, in the Exchange Control Act, 1947. It was previously unclear whether certain countries were members. Second, the Sterling Area became more discriminatory. Members tended not to restrict trade with other Sterling Area countries while applying restrictions to trade with other countries. The intention was to economize on the use of United States dollars, and other non-sterling currencies, which were in short supply. Third, war finances had increased many countries’ sterling balances in London without increasing the reserves held by the British government. This exposed the reserves to heavier pressures than they had had to withstand before the war.

In 1947, the Sterling Area was defined as all members of the Commonwealth except Canada and Newfoundland, all British territories, Burma, Iceland, Iraq, Irish Republic, Jordan, Kuwait and the other Persian Gulf sheikhdoms, and Libya. In the rest of the world, which was categorized as the Prescribed Territories, controls prevented the conversion of British pounds to U.S. dollars (and to currencies that were pegged to the U.S. dollar). Formal convertibility of British pounds into U.S. dollars, which was introduced in 1958, applied only to non-residents of the Sterling Area (Schenk, 2010).

Following the 1949 devaluation of the British pound, by 30.5% from US$4.03 to US$2.80, much of the rest of the world, and almost all of the Sterling Area, devalued too. This indicates the major international trading role of the British economy. A notable exception, which did not devalue immediately, was Pakistan. Since most currencies’ sterling parities did not change, the intended effect of the British devaluation was largely destroyed.

The world economy had changed by the time of the next sterling crisis. The immediate international impact of the 1967 devaluation of the British pound, by 14.3% from US$2.80 to US$2.40, reflects the diminished significance of the Sterling Area. In marked contrast to the response to the 1949 devaluation, only fourteen members of the International Monetary Fund devalued their currencies following the British devaluation of 1967. A significant proportion of Sterling Area countries, including Australia, India, Pakistan, and South Africa, did not devalue. Many of the other Sterling Area countries, including Ceylon (Sri Lanka), Hong Kong, Iceland, Fiji, and New Zealand, devalued by different percentages, which changed their currencies’ sterling parities. Outside the Sterling Area, a small number of countries devalued; most of these devalued by percentages that were different from the British devaluation. The effect was that a large number of sterling parities were changed by the 1967 devaluation.
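
The devaluation percentages quoted above follow directly from the dollar parities. A quick check, using only the parities given in the text:

```python
# Percentage devaluation, measured as the fall in the dollar value
# of the pound.
def devaluation(old_parity, new_parity):
    return (old_parity - new_parity) / old_parity

print(f"1949: {devaluation(4.03, 2.80):.1%}")  # about 30.5%
print(f"1967: {devaluation(2.80, 2.40):.1%}")  # about 14.3%
```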

The Sterling Area showed obvious signs of decline even before the 1967 devaluation. For example, Nigeria ended its sterling parity in 1962 and Ghana ended its sterling parity in 1965. In 1964, sterling was 83% of the official reserves of overseas Sterling Area countries, but this share had decreased to 75% in 1966 and to 65% in 1967 (Schenk, 2010). The role of the UK in the Sterling Area was frequently seen, especially by France, as an obstacle in the British application to join the European Economic Community.

The reserves of the overseas members of the Sterling Area suffered a capital loss following the 1967 devaluation. This encouraged diversification of reserves into other types of assets. The British government responded by negotiating the Basel Agreements with other governments in the Sterling Area (Yeager, 1976). Each country in the Sterling Area undertook to limit its holdings of non-sterling assets and, in return, the U.S. dollar value of its sterling assets was guaranteed. These agreements restrained, but did not halt, the downward trend of holdings of sterling reserves. The Basel Agreements were partly underwritten by other central banks, which were concerned for international monetary stability, and were arranged with the assistance of the Bank for International Settlements.


In 1972, the UK ended the fixed exchange rate, in U.S. dollars, of the pound. In 1971 or in 1972, most other Sterling Area countries ended their fixed exchange rates with respect to the British pound. Some of these countries, including Australia, Hong Kong, Jamaica, Jordan, Kenya, Malaysia, New Zealand, Pakistan, Singapore, South Africa, Sri Lanka, Tanzania, Uganda, and Zambia, pegged their currencies to the U.S. dollar. The minority of Sterling Area members that retained their sterling parities included Bangladesh, Gambia, Irish Republic, Seychelles, and the Eastern Caribbean Currency Union. Other countries in the Sterling Area introduced floating exchange rates.

Also in 1972, the UK extended to Sterling Area countries the exchange controls on capital transactions that had previously applied only to other countries. This decision, combined with the changes in sterling parities, meant that the Sterling Area effectively ceased to exist in 1972.

In 1979, when it joined the European Monetary System, the Irish Republic ended its fixed exchange rate with respect to the British pound. Membership of the EMS, which the UK did not join until 1990, required the ending of the link between the British pound and the Irish Republic pound. Also in 1979, the UK abolished all of its remaining exchange controls.


The Sterling Area was a zone of relative stability of exchange rates but not a monetary union. It did not have a single central bank. Distinct national currencies circulated within its boundaries, and their exchange rates, although fixed with respect to the British pound, were occasionally changed. For example, although the New Zealand pound was devalued in 1949 by the same percentage as the British pound, it was revalued in 1948 and devalued in 1967, both relative to the British pound. The other important feature of the Sterling Area is that capital movements between its members were generally unregulated.

The decline of the Sterling Area was related to the decline of the British pound as a reserve currency. In 1950, more than 55% of the world’s reserves were in sterling (Schenk, 2010). In 2011, the proportion was about 2% (International Monetary Fund).

In addition to the UK, the vestige of the Sterling Area now consists only of Falkland Islands, Gibraltar, Guernsey, Isle of Man, Jersey, and St. Helena, and is of purely local significance. No other countries now fix their exchange rates in terms of the British pound. Since 1985, no members of the International Monetary Fund have specified fixed exchange rates in British pounds. In one generation, the British pound has evolved from a pivotal role in the world economy to its present minor role.

References and other important sources:

Aldcroft, Derek and Michael Oliver. Exchange Rate Regimes in the Twentieth Century. Edward Elgar Publishing, Cheltenham, 1998.

Conan, Arthur. The Problem of Sterling. Macmillan Press, London, 1966.

Day, Alan. Outline of Monetary Economics. Oxford University Press, 1966.

McMahon, Christopher. Sterling in the Sixties. Oxford University Press, 1964.

Sayers, Richard. Modern Banking [7th ed]. Oxford University Press, 1967.

Scammell, W.M. The International Economy since 1945 [2nd ed]. Macmillan Press, London, 1983.

Schenk, Catherine. The Decline of Sterling: Managing the Retreat of an International Currency, 1945–92. Cambridge University Press, 2010.

Tew, Brian. The Evolution of the International Monetary System, 1945–88. Hutchinson and Co, London, 1988.

Wells, Sidney. International Economics. George Allen and Unwin Ltd, London, 1971.

Yeager, Leland. International Monetary Relations: Theory, History, and Policy [2nd ed]. Harper and Row Publishers, New York, 1976.

Jerry Mushin can be reached at

Women Workers in the British Industrial Revolution

Joyce Burnette, Wabash College

Historians disagree about whether the British Industrial Revolution (1760-1830) was beneficial for women. Frederick Engels, writing in the late nineteenth century, thought that the Industrial Revolution increased women’s participation in labor outside the home, and claimed that this change was emancipating.1 More recent historians dispute the claim that women’s labor force participation rose, and focus more on the disadvantages women experienced during this time period.2 One thing is certain: the Industrial Revolution was a time of important changes in the way that women worked.

The Census

Unfortunately, the historical sources on women’s work are neither as complete nor as reliable as we would like. Aggregate information on the occupations of women is available only from the census, and while census data has the advantage of being comprehensive, it is not a very good measure of work done by women during the Industrial Revolution. For one thing, the census does not provide any information on individual occupations until 1841, which is after the period we wish to study.3 Even then the data on women’s occupations is questionable. For the 1841 census, the directions for enumerators stated that “The professions &c. of wives, or of sons or daughters living with and assisting their parents but not apprenticed or receiving wages, need not be inserted.” Clearly this census would not give us an accurate measure of female labor force participation. Table One illustrates the problem further; it shows the occupations of men and women recorded in the 1851 census, for 20 occupational categories. These numbers suggest that female labor force participation was low, and that 40 percent of occupied women worked in domestic service. However, economic historians have demonstrated that these numbers are misleading. First, many women who were actually employed were not listed as employed in the census. Women who appear in farm wage books have no recorded occupation in the census.4 At the same time, the census over-estimates participation by listing in the “domestic service” category women who were actually family members. In addition, the census exaggerates the extent to which women were concentrated in domestic service occupations because many women listed as “maids”, and included in the domestic servant category in the aggregate tables, were really agricultural workers.5

Table One

Occupational Distribution in the 1851 Census of Great Britain

[The figures in this table did not survive conversion. The table reported, for 20 occupational categories, the number of males (thousands), the number of females (thousands), and the percent female. The surviving category labels are: Public Administration; Armed Forces; Domestic Services; Transportation & Communications; Metal Manufactures; Building & Construction; Wood & Furniture; Bricks, Cement, Pottery, Glass; Leather & Skins; Paper & Printing; Food, Drink, Lodging; Total Occupied; and Total Unoccupied.]

Source: B.R. Mitchell, Abstract of British Historical Statistics, Cambridge: Cambridge University Press, 1962, p. 60.

Domestic Service

Domestic work – cooking, cleaning, caring for children and the sick, fetching water, making and mending clothing – took up the bulk of women’s time during the Industrial Revolution period. Most of this work was unpaid. Some families were well-off enough that they could employ other women to do this work, as live-in servants, as charring women, or as service providers. Live-in servants were fairly common; even middle-class families had maids to help with the domestic chores. Charring women did housework on a daily basis. In London women were paid 2s.6d. per day for washing, which was more than three times the 8d. typically paid for agricultural labor in the country. However, a “day’s work” in washing could last 20 hours, more than twice as long as a day’s work in agriculture.6 Other women worked as laundresses, doing the washing in their own homes.
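A minimal sketch of the arithmetic behind these comparisons, using the wages quoted above and the hours in note 6 (1 shilling = 12 pence; figures are illustrative only):

```python
# Illustrative arithmetic for the washing vs. agricultural wage comparison.
# Figures are those quoted in the text; 1 shilling (s.) = 12 pence (d.).

def to_pence(shillings=0, pence=0):
    """Convert a wage quoted in shillings and pence to pence."""
    return 12 * shillings + pence

washing_per_day = to_pence(shillings=2, pence=6)  # 2s.6d. per day's washing = 30d.
farm_per_day = to_pence(pence=8)                  # 8d. per day in agriculture

print(washing_per_day / farm_per_day)  # 3.75 -> "more than three times" the farm wage

# But a day's washing could run 20 hours, versus roughly 8-10 in agriculture,
# so the hourly gap was far smaller (cf. note 6).
print(washing_per_day / 20)  # 1.5d. per hour washing
print(farm_per_day / 8)      # about 1d. per hour in agriculture
```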

Cottage Industry

Before factories appeared, most textile manufacture (including the main processes of spinning and weaving) was carried out under the “putting-out” system. Since raw materials were expensive, textile workers rarely had enough capital to be self-employed, but would take raw materials from a merchant, spin or weave the materials in their homes, and then return the finished product and receive a piece-rate wage. This system disappeared during the Industrial Revolution as new machinery requiring water or steam power appeared, and work moved from the home to the factory.

Before the Industrial Revolution, hand spinning had been a widespread female employment. It could take as many as ten spinners to provide one hand-loom weaver with yarn, and men did not spin, so most of the workers in the textile industry were women. The new textile machines of the Industrial Revolution changed that. Wages for hand-spinning fell, and many rural women who had previously spun found themselves unemployed. In a few locations, new cottage industries such as straw-plaiting and lace-making grew and took the place of spinning, but in other locations women remained unemployed.

Another important cottage industry was the pillow-lace industry, so called because women wove the lace on pins stuck in a pillow. In the late-eighteenth century women in Bedford could earn 6s. a week making lace, which was about 50 percent more than women earned in agriculture. However, this industry too disappeared due to mechanization. Following Heathcote’s invention of the bobbinet machine (1809), cheaper lace could be made by embroidering patterns on machine-made lace net. This new type of lace created a new cottage industry, that of “lace-runners” who embroidered patterns on the lace.

The straw-plaiting industry employed women braiding straw into bands used for making hats and bonnets. The industry prospered around the turn of the century, thanks both to the invention of a simple tool for splitting the straw and to the war, which cut off competition from Italy. At this time women could earn 4s. to 6s. per week plaiting straw. This industry also declined, though, following the increase in free trade with the Continent in the 1820s.

Factories

A defining feature of the Industrial Revolution was the rise of factories, particularly textile factories. Work moved out of the home and into a factory, which used a central power source to run its machines. Water power was used in most of the early factories, but improvements in the steam engine made steam power possible as well. The most dramatic productivity growth occurred in the cotton industry. The invention of James Hargreaves’ spinning jenny (1764), Richard Arkwright’s “throstle” or “water frame” (1769), and Samuel Crompton’s spinning mule (1779, so named because it combined features of the two earlier machines) revolutionized spinning. Britain began to manufacture cotton cloth, and declining prices for the cloth encouraged both domestic consumption and export. Machines also appeared for other parts of the cloth-making process, the most important of which was Edmund Cartwright’s powerloom, which was adopted slowly because of imperfections in the early designs, but was widely used by the 1830s. While cotton was the most important textile of the Industrial Revolution, there were advances in machinery for silk, flax, and wool production as well.7

The advent of new machinery changed the gender division of labor in textile production. Before the Industrial Revolution, women spun yarn using a spinning wheel (or occasionally a distaff and spindle). Men did not spin, and this division of labor made sense because women were trained to have more dexterity than men, and because men’s greater strength made them more valuable in other occupations. In contrast to spinning, handloom weaving was done by both sexes, but men outnumbered women. Men monopolized highly skilled preparation and finishing processes such as wool combing and cloth-dressing. With mechanization, the gender division of labor changed. Women used the spinning jenny and water frame, but mule spinning was almost exclusively a male occupation because it required more strength, and because the male mule-spinners actively opposed the employment of female mule-spinners. Women mule-spinners in Glasgow, and their employers, were the victims of violent attacks by male spinners trying to reduce the competition in their occupation.8 While they moved out of spinning, women seem to have increased their employment in weaving (both in handloom weaving and eventually in powerloom factories). Both sexes were employed as powerloom operators.

Table Two

Factory Workers in 1833: Females as a Percent of the Workforce

Industry Ages 12 and under Ages 13-20 Ages 21+ All Ages
Cotton 51.8 65.0 52.2 58.0
Wool 38.6 46.2 37.7 40.9
Flax 54.8 77.3 59.5 67.4
Silk 74.3 84.3 71.3 78.1
Lace 38.7 57.4 16.6 36.5
Potteries 38.1 46.9 27.1 29.4
Dyehouse 0.0 0.0 0.0 0.0
Glass 0.0 0.0 0.0 0.0
Paper - 100.0 39.2 53.6
Whole Sample 52.8 66.4 48.0 56.8

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX. Mitchell collected data from 82 cotton factories, 65 wool factories, 73 flax factories, 29 silk factories, 7 potteries, 11 lace factories, one dyehouse, one “glass works”, and 2 paper mills throughout Great Britain.

While the highly skilled and highly paid task of mule-spinning was a male occupation, many women and girls were engaged in other tasks in textile factories. For example, the wet-spinning of flax, introduced in Leeds in 1825, employed mainly teenage girls. Girls often worked as assistants to mule-spinners, piecing together broken threads. In fact, females were a majority of the factory labor force. Table Two shows that 57 percent of factory workers were female, most of them under age 20. Women were widely employed in all the textile industries, and constituted the majority of workers in cotton, flax, and silk. Outside of textiles, women were employed in potteries and paper factories, but not in dye or glass manufacture. Of the women who worked in factories, 16 percent were under age 13, 51 percent were between the ages of 13 and 20, and 33 percent were age 21 and over. On average, girls earned the same wages as boys. Children’s wages rose from about 1s.6d. per week at age 7 to about 5s. per week at age 15. Beginning at age 16, a large gap between male and female wages appeared. At age 30, women factory workers earned only one-third as much as men.

Figure One

Distribution of Male and Female Factory Employment by Age, 1833

[Figure not reproduced.]

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.

The y-axis shows the percentage of total employment within each sex that is in that five-year age category.

Figure Two

Wages of Factory Workers in 1833

[Figure not reproduced.]

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.


Wage Workers

Wage-earners in agriculture generally fit into one of two broad categories – servants who were hired annually and received part of their wage in room and board, and day-laborers who lived independently and were paid a daily or weekly wage. Before industrialization servants comprised between one-third and one-half of labor in agriculture.9 For servants the value of room and board was a substantial portion of their compensation, so the ratio of money wages is an under-estimate of the ratio of total wages (see Table Three). Most servants were young and unmarried. Because servants were paid part of their wage in kind, as board, the use of the servant contract tended to fall when food prices were high. During the Industrial Revolution the use of servants seems to have fallen in the South and East.10 The percentage of servants who were female also declined in the first half of the nineteenth century.11

Table Three

Wages of Agricultural Servants (£ per year)

[The wage figures in this table did not survive conversion. The table reported the money wage and in-kind wage of male and female servants, together with the female-male ratio of money wages and of total wages, for Lancashire (1770), Oxfordshire (1770), Staffordshire (1770), and Yorkshire (1821).]

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.

While servants lived with the farmer and received food and lodging as part of their wage, laborers lived independently, received fewer in-kind payments, and were paid a daily or a weekly wage. Though the majority of laborers were male, some were female. Table Four shows the percentage of laborers who were female at various farms in the late-eighteenth and early-nineteenth centuries. These numbers suggest that female employment was widespread, but varied considerably from one location to the next. Compared to men, female laborers generally worked fewer days during the year. The employment of female laborers was concentrated around the harvest, and women rarely worked during the winter. While men commonly worked six days per week, outside of harvest women generally averaged around four days per week.

Table Four

Female Day-Laborers at Selected Farms

[The percentages in this table did not survive conversion. The table reported the percent female among day-laborers, by year and farm: Oakes in Norton, Derbyshire (1772-5, 1831-45); Dunster Castle Farm, Somerset (1774-7, 1785-92, 1794-5, 1801-3, 1836-9, 1846-9); Nettlecombe Barton, Somerset (1801-4, 1814-6, 1826-8); Shipton Moyne, Gloucestershire (1828-39); and Lustead, Norfolk (1839-40).]
Sources: Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History 59 (March 1999): 41-67; Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999. Sotheron-Estcourt accounts, G.R.O. D1571; Ketton-Cremer accounts, N.R.O. WKC 5/250

The wages of female day-laborers were fairly uniform; generally a farmer paid the same wage to all the adult women he hired. Women’s daily wages were between one-third and one-half of male wages. Women generally worked shorter days, though, so the gap in hourly wages was not quite this large.12 In the less populous counties of Northumberland and Durham, male laborers were required to provide a “bondager,” a woman (usually a family member) who was available for day-labor whenever the employer wanted her.13

Table Five

Wages of Agricultural Laborers

Year Location Female Wage (d./day) Male Wage (d./day) Ratio
1770 Yorkshire 5 12 0.42
1789 Hertfordshire 6 16 0.38
1797 Warwickshire 6 14 0.43
1807 Oxfordshire 9 23 0.39
1833 Cumberland 12 24 0.50
1833 Essex 10 22 0.45
1838 Worcester 9 18 0.50

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
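Because the ratio column in Table Five is simply the female daily wage divided by the male daily wage, the printed figures are easy to check. A minimal sketch, using the rows as printed:

```python
# Recompute the female-male wage ratios in Table Five.
# Each row: (year, location, female wage in d./day, male wage in d./day).
rows = [
    (1770, "Yorkshire",      5, 12),
    (1789, "Hertfordshire",  6, 16),
    (1797, "Warwickshire",   6, 14),
    (1807, "Oxfordshire",    9, 23),
    (1833, "Cumberland",    12, 24),
    (1833, "Essex",         10, 22),
    (1838, "Worcester",      9, 18),
]

for year, place, female, male in rows:
    # Matches the table to two decimal places: 0.42, 0.38, 0.43, 0.39, ...
    print(f"{year} {place}: {female / male:.2f}")
```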

Various sources suggest that women’s employment in agriculture declined during the early nineteenth century. Enclosure increased farm size and changed the patterns of animal husbandry, both of which seem to have led to reductions in female employment.14 More women were employed during harvest than during other seasons, but women’s employment during harvest declined as the scythe replaced the sickle as the most popular harvest tool. While women frequently harvested with the sickle, they did not use the heavier scythe.15 Female employment fell the most in the East, where farms increasingly specialized in grain production. Women had more work in the West, which specialized more in livestock and dairy farming.16

Self-Employment in Agriculture

During the eighteenth century there were many opportunities for women to be productively employed in farm work on their own account, whether they were wives of farmers on large holdings, or wives of landless laborers. In the early nineteenth century, however, many of these opportunities disappeared, and women’s participation in agricultural production fell.

In a village that had a commons, even if the family merely rented a cottage the wife could be self-employed in agriculture because she could keep a cow, or other animals, on the commons. By careful management of her stock, a woman might earn as much during the year as her husband earned as a laborer. Women also gathered fuel from the commons, saving the family considerable expense. The enclosure of the commons, though, eliminated these opportunities. In an enclosure, land was re-assigned so as to eliminate the commons and consolidate holdings. Even when the poor had clear legal rights to use the commons, these rights were not always compensated in the enclosure agreement. While enclosure occurred at different times for different locations, the largest waves of enclosures occurred in the first two decades of the nineteenth century, meaning that, for many, opportunities for self-employment in agriculture declined at the same time as employment in cottage industry declined.17

Only a few opportunities for agricultural production remained for the landless laboring family. In some locations landlords permitted landless laborers to rent small allotments, on which they could still grow some of their own food. The right to glean on fields after harvest seems to have been maintained at least through the middle of the nineteenth century, by which time it had become one of the few agricultural activities available to women in some areas. Gleaning was a valuable right; the value of the grain gleaned was often between 5 and 10 percent of the family’s total annual income.18

In the eighteenth century it was common for farmers’ wives to be actively involved in farm work, particularly in managing the dairy, pigs, and poultry. The dairy was an important source of income for many farms, and its success depended on the skill of the mistress, who usually ran the operation with no help from men. In the nineteenth century, however, farmers’ wives were more likely to withdraw from farm management, leaving the dairy to the management of dairymen who paid a fixed fee for the use of the cows.19 While poor women withdrew from self-employment in agriculture because of lost opportunities, farmers’ wives seem to have withdrawn because greater prosperity allowed them to enjoy more leisure.

It was less common for women to manage their own farms, but not unknown. Commercial directories list numerous women farmers. For example, the 1829 Directory of the County of Derby lists 3354 farmers, of which 162, or 4.8%, were clearly female.20 While the commercial directories themselves do not indicate to what extent these women were actively involved in their farms, other evidence suggests that at least some women farmers were actively involved in the work of the farm.21

Businesswomen

During the Industrial Revolution period women were also active businesswomen in towns. Among business owners listed in commercial directories, about 10 percent were female. Table Seven shows the percentage female in all the trades with at least 25 people listed in the 1788 Manchester commercial directory. Single women, married women, and widows are included in these numbers. Sometimes these women were widows carrying on the businesses of their deceased husbands, but even in this case that does not mean they were simply figureheads. Widows often continued their husband’s businesses because they had been active in management of the business while their husband was alive, and wished to continue.22 Sometimes married women were engaged in trade separately from their husbands. Women most commonly ran shops and taverns, and worked as dressmakers and milliners, but they were not confined to these areas, and appear in most of the trades listed in commercial directories. Manchester, for example, had six female blacksmiths and five female machine makers in 1846. Between 1730 and 1800 there were 121 “rouping women” selling off estates in Edinburgh. 23

Table Six

Business Owners Listed in Commercial Directories

[The counts in this table did not survive conversion. The table reported the number of male, female, and unknown-gender business owners, and the percent female, for Manchester (1788, 1824-5, and 1846), Birmingham (1850), and Derby (1850).]

Sources: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984); Pigot and Dean’s Directory for Manchester, Salford, &c. for 1824-5 (Manchester 1825); Slater’s National Commercial Directory of Ireland (Manchester, 1846); Slater’s Royal National and Commercial Directory (Manchester, 1850)

Table Seven

Women in Trades in Manchester, 1788

[The counts in this table did not survive conversion. The table reported, for each trade with at least 25 people listed, the number of men, the number of women, the number of persons of unknown gender, and the percent female. The surviving trade labels are: Apothecary/Surgeon/Midwife; Boot and Shoe Makers; Corn & Flour Dealers; Cotton Dealers; Drapers, Mercers, and Dealers of Cloth; Fustian Cutters/Shearers; Grocers & Tea Dealers; Hairdressers & Peruke Makers; Liquor Dealers; Cloth Manufacturers; Publichouses/Inns/Taverns; and Schoolmasters/mistresses.]
Source: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984)

Guilds often controlled access to trades, admitting only those who had served an apprenticeship and thus earned the “freedom” of the trade. Women could obtain “freedom” not only by apprenticeship, but also by widowhood. The widow of a tradesman was often considered knowledgeable enough in the trade that she was given the right to carry on the trade even without an apprenticeship. In the eighteenth century women were apprenticed to a wide variety of trades, including butchery, bookbinding, brush making, carpentry, ropemaking and silversmithing.24 Between the eighteenth and nineteenth centuries the number of females apprenticed to trades declined, possibly suggesting reduced participation by women. However, the power of the guilds and the importance of apprenticeship were also declining during this time, so the decline in female apprenticeships may not have been an important barrier to employment.25

Many women worked in the factories of the Industrial Revolution, and a few women actually owned factories. In Keighley, West Yorkshire, Ann Illingworth, Miss Rachael Leach, and Mrs. Betty Hudson built and operated textile mills.26 In 1833 Mrs. Doig owned a powerloom factory in Scotland, which employed 60 workers.27

While many women did successfully enter trades, there were obstacles to women’s employment that kept their numbers low. Women generally received less education than men (though education of the time was of limited practical use). Women may have found it more difficult than men to raise the necessary capital because English law did not consider a married woman to have any legal existence; she could not sue or be sued. A married woman was a feme covert and technically could not make any legally binding contracts, a fact which may have discouraged others from loaning money to or making other contracts with married women. However, this law was not as limiting in practice as it would seem to be in theory because a married woman engaged in trade on her own account was treated by the courts as a feme sole and was responsible for her own debts.28

The professionalization of certain occupations resulted in the exclusion of women from work they had previously done. Women had provided medical care for centuries, but the professionalization of medicine in the early-nineteenth century made it a male occupation. The Royal College of Physicians admitted only graduates of Oxford and Cambridge, schools to which women were not admitted until the twentieth century. Women were even replaced by men in midwifery. The process began in the late-eighteenth century, when we observe the use of the term “man-midwife,” an oxymoronic title suggestive of changing gender roles. In the nineteenth century the “man-midwife” disappeared, and women were replaced by physicians or surgeons for assisting childbirth. Professionalization of the clergy was also effective in excluding women. While the Church of England did not allow women ministers, the Methodist movement had many women preachers during its early years. However, even among the Methodists female preachers disappeared when lay preachers were replaced with a professional clergy in the early nineteenth century.29

In other occupations where professionalization was not as strong, women remained an important part of the workforce. Teaching, particularly in the lower grades, was a common profession for women. Some were governesses, who lived as household servants, but many opened their own schools and took in pupils. The writing profession seems to have been fairly open to women; the leading novelists of the period include Jane Austen, Charlotte and Emily Brontë, Fanny Burney, George Eliot (the pen name of Mary Ann Evans), Elizabeth Gaskell, and Frances Trollope. Female non-fiction writers of the period include Jane Marcet, Hannah More, and Mary Wollstonecraft.

Other Occupations

The occupations listed above are by no means a complete listing of the occupations of women during the Industrial Revolution. Women made buttons, nails, screws, and pins. They worked in the tin plate, silver plate, pottery and Birmingham “toy” trades (which made small articles like snuff boxes). Women worked in the mines until the Mines Act of 1842 prohibited them from working underground, but afterwards women continued to pursue above-ground mining tasks.

Married Women in the Labor Market

While there are no comprehensive sources of information on the labor force participation of married women, household budgets reported by contemporary authors give us some information on women’s participation.30 For the period 1787 to 1815, 66 percent of married women in working-class households had either a recorded occupation or positive earnings. For the period 1816-20 the rate fell to 49 percent, but in 1821-40 it recovered to 62 percent. Table Eight gives participation rates of women by date and occupation of the husband.

Table Eight

Participation Rates of Married Women


[The participation rates in this table did not survive conversion. The table reported married women’s labor force participation rates by period and by the husband’s occupational group; the surviving column labels are High-Wage Agriculture and Low-Wage Agriculture.]
Source: Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review 48 (February 1995): 89-117.

While many wives worked, the amount of their earnings was small relative to their husband’s earnings. Annual earnings of married women who did work averaged only about 28 percent of their husband’s earnings. Because not all women worked, and because children usually contributed more to the family budget than their mothers, for the average family the wife contributed only around seven percent of total family income.

Childcare

Women workers used a variety of methods to care for their children. Sometimes childcare and work were compatible, and women took their children with them to the fields or shops where they worked.31 Sometimes women working at home would give their infants opiates such as “Godfrey’s Cordial” in order to keep the children quiet while their mothers worked.32 The movement of work into factories increased the difficulty of combining work and childcare. In most factory work the hours were rigidly set, and women who took the jobs had to accept the twelve or thirteen hour days. Work in the factories was very disciplined, so the women could not bring their children to the factory, and could not take breaks at will. However, these difficulties did not prevent women with small children from working.

Nineteenth-century mothers used older siblings, other relatives, neighbors, and dame schools to provide child care while they worked.33 Occasionally mothers would leave young children home alone, but this was dangerous enough that only a few did so.34 Children as young as two might be sent to dame schools, in which women would take children into their home and provide child care, as well as some basic literacy instruction.35 In areas where lace-making or straw-plaiting thrived, children were sent from about age seven to “schools” where they learned the trade.36

Mothers might use a combination of different types of childcare. Elizabeth Wells, who worked in a Leicester worsted factory, had five children, ages 10, 8, 6, 2, and four months. The eldest, a daughter, stayed home to tend the house and care for the infant. The second child worked, and the six-year-old and two-year-old were sent to “an infant school.”37 Mary Wright, an “over-looker” in the rag-cutting room of a Buckinghamshire paper factory, had five children. The eldest worked in the rag-cutting room with her, the youngest was cared for at home, and the middle three were sent to a school; “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for the three others. They go to a school, where they are taken care of and taught to read.”38

The cost of childcare was substantial. At the end of the eighteenth century the price of child-care was about 1s. a week, which was about a quarter of a woman’s weekly earnings in agriculture.39 In the 1840s mothers paid anywhere from 9d. to 2s.6d. per week for child care, out of a wage of around 7s. per week.40
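A minimal sketch of the arithmetic behind these shares, using the figures quoted above (1 shilling = 12 pence; the late-eighteenth-century weekly earnings of about 4s. are implied by the “about a quarter” comparison):

```python
# Childcare cost as a fraction of a working mother's weekly wage.
# All figures are the ones quoted in the text, converted to pence (1s. = 12d.).

def pence(s=0, d=0):
    return 12 * s + d

# Late eighteenth century: about 1s. a week against earnings of about 4s.
print(pence(s=1) / pence(s=4))      # 0.25 -> "about a quarter"

# 1840s: 9d. to 2s.6d. a week against a wage of around 7s.
low, high, wage = pence(d=9), pence(s=2, d=6), pence(s=7)
print(low / wage, high / wage)      # roughly 0.11 to 0.36 of the weekly wage
```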

For Further Reading

Burnette, Joyce. “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain.” Economic History Review 50 (1997): 257-281.

Davidoff, Leonore, and Catherine Hall. Family Fortunes: Men and Women of the English Middle Class, 1780-1850. Chicago: University of Chicago Press, 1987.

Honeyman, Katrina. Women, Gender and Industrialisation in England, 1700-1870. New York: St. Martin’s Press, 2000.

Horrell, Sara, and Jane Humphries. “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865.” Economic History Review 48 (1995): 89-117.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Peter. “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850.” Economic History Review 44 (1991): 461-476.

Kussmaul, Ann. Servants in Husbandry in Early Modern England. Cambridge: Cambridge University Press, 1981.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Sanderson, Elizabeth. Women and Work in Eighteenth-Century Edinburgh. New York: St. Martin’s Press, 1996.

Snell, K.D.M. Annals of the Labouring Poor: Social Change and Agrarian England, 1660-1900. Cambridge: Cambridge University Press, 1985.

Valenze, Deborah. Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England. Princeton: Princeton University Press, 1985.

Valenze, Deborah. The First Industrial Woman. Oxford: Oxford University Press, 1995.

1 “Since large-scale industry has transferred the woman from the house to the labour market and the factory, and makes her, often enough, the bread-winner of the family, the last remnants of male domination in the proletarian home have lost all foundation – except, perhaps, for some of that brutality towards women which became firmly rooted with the establishment of monogamy. . . . It will then become evident that the first premise for the emancipation of women is the reintroduction of the entire female sex into public industry.” Frederick Engels, The Origin of the Family, Private Property and the State, in Karl Marx and Frederick Engels: Selected Works, New York: International Publishers, 1986, pp. 508, 510.

2 Ivy Pinchbeck (Women Workers and the Industrial Revolution, Routledge, 1930) claimed that higher incomes allowed some women to withdraw from the labor force. While she saw some disadvantages resulting from this withdrawal, particularly the loss of independence, she thought that overall women benefited from having more time to devote to their homes and families. Davidoff and Hall (Family Fortunes: Men and Women of the English Middle Class, 1780-1850, Univ. of Chicago Press, 1987) agree that women withdrew from work, but they see the change as a negative result of gender discrimination. Similarly, Horrell and Humphries (“Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117) do not find that rising incomes caused declining labor force participation, and they believe that declining demand for female workers caused the female exodus from the workplace.

3 While the British census began in 1801, individual enumeration did not begin until 1841. For a detailed description of the British censuses of the nineteenth century, see Edward Higgs, Making Sense of the Census, London: HMSO, 1989.

4 For example, Helen Speechley, in her dissertation, showed that seven women who worked for wages at a Somerset farm had no recorded occupation in the 1851 census. See Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999.

5 Edward Higgs finds that removing family members from the “servants” category reduced the number of servants in Rochdale in 1851. Enumerators did not clearly distinguish between the terms “housekeeper” and “housewife.” See Edward Higgs, “Domestic Service and Household Production” in Angela John, ed., Unequal Opportunities, Oxford: Basil Blackwell, and “Women, Occupations and Work in the Nineteenth Century Censuses,” History Workshop, 1987, 23:59-80. In contrast, the censuses of the early 20th century seem to be fairly accurate; see Tim Hatton and Roy Bailey, “Women’s Work in Census and Survey, 1911-1931,” Economic History Review, Feb. 2001, LIV:87-107.

6 A shilling was equal to 12 pence, so if women earned 2s.6d. for 20 hours, they earned 1.5d. per hour. Women agricultural laborers earned closer to 1d. per hour, so the London wage was higher. See Dorothy George, London Life in the Eighteenth-Century, London: Kegan Paul, Trench, Trubner & Co., 1925, p. 208, and Patricia Malcolmson, English Laundresses, Univ. of Illinois Press, 1986, p. 25.

7 On the technology of the Industrial Revolution, see David Landes, The Unbound Prometheus, Cambridge Univ. Press, 1969, and Joel Mokyr, The Lever of Riches, Oxford Univ. Press, 1990.

8 A petition from Glasgow cotton manufacturers makes the following claim, “In almost every department of the cotton spinning business, the labour of women would be equally efficient with that of men; yet in several of these departments, such measures of violence have been adopted by the combination, that the women who are willing to be employed, and who are anxious by being employed to earn the bread of their families, have been driven from their situations by violence. . . . Messrs. James Dunlop and Sons, some years ago, erected cotton mills in Calton of Glasgow, on which they expended upwards of [£]27,000 forming their spinning machines, (Chiefly with the view of ridding themselves of the combination [the male union],) of such reduced size as could easily be wrought by women. They employed women alone, as not being parties to the combination, and thus more easily managed, and less insubordinate than male spinners. These they paid at the same rate of wages, as were paid at other works to men. But they were waylaid and attacked, in going to, and returning from their work; the houses in which they resided, were broken open in the night. The women themselves were cruelly beaten and abused; and the mother of one of them killed; . . . And these nefarious attempts were persevered in so systematically, and so long, that Messrs. Dunlop and sons, found it necessary to dismiss all female spinners from their works, and to employ only male spinners, most probably the very men who had attempted their ruin.” First Report from the Select Committee on Artizans and Machinery, British Parliamentary Papers, 1824 vol. V, p. 525.

9 Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

10 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, Ch. 1, and K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Ch. 2.

11 For the period 1574 to 1821 about 45 percent of servants were female, but this fell to 32 percent in 1851. See Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

12 Men usually worked 12-hour days, and women averaged closer to 10 hours. See Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review, May 1997, 50:257-281.

13 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 65.

14 See Robert Allen, Enclosure and the Yeoman, Clarendon Press, 1992, and Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History, March 1999, 59:41-67.

15 While the scythe had been used for mowing grass for hay or cheaper grains for some time, the sickle was used for harvesting wheat until the nineteenth century. Thus adoption of the scythe for harvesting wheat seems to be a response to changing prices rather than invention of a new technology. The scythe required less labor to harvest a given acre, but left more grain on the ground, so as grain prices fell relative to wages, farmers substituted the scythe for the sickle. See E.J.T. Collins, “Harvest Technology and Labour Supply in Britain, 1790-1870,” Economic History Review, Dec. 1969, XXIII:453-473.

16 K.D.M. Snell, Annals of the Labouring Poor, Cambridge, 1985.

17 See Jane Humphries, “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries,” Journal of Economic History, March 1990, 50:17-42, and J.M. Neeson, Commoners: Common Rights, Enclosure and Social Change in England, 1700-1820, Cambridge Univ. Press, 1993.

18 See Peter King, “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850,” Economic History Review, 1991, XLIV:461-476.

19 Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, pp. 41-42. See also Deborah Valenze, The First Industrial Woman, Oxford Univ. Press, 1995.

20 Stephen Glover, The Directory of the County of Derby, Derby: Henry Mozley and Son, 1829.

21 Eden gives an example of gentlewomen who, on the death of their father, began to work as farmers. He notes, “not seldom, in one and the same day, they have divided their hours in helping to fill the dung-cart, and receiving company of the highest rank and distinction.” (F.M. Eden, The State of the Poor, vol. i., p. 626.) One woman farmer who was clearly an active manager celebrated her success in a letter sent to the Annals of Agriculture, (quoted by Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 30): “I bought a small estate, and took possession of it in the month of July, 1803. . . . As a woman undertaking to farm is generally a subject of ridicule, I bought the small estate by way of experiment: the gentlemen of the county have now complimented me so much on having set so good an example to the farmers, that I have determined on taking a very large farm into my hands.” The Annals of Agriculture give a number of examples of women farmers cited for their experiments or their prize-winning crops.

22 Tradesmen considered themselves lucky to find a wife who was good at business. In his autobiography James Hopkinson, a cabinetmaker, said of his wife, “I found I had got a good and suitable companion one with whom I could take sweet council and whose love and affections was only equall’d by her ability as a business woman.” Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, 1968, p. 96.

23 See Elizabeth Sanderson, Women and Work in Eighteenth-Century Edinburgh, St. Martin’s Press, 1996.

24 See K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Table 6.1.

25 The law requiring a seven-year apprenticeship before someone could work in a trade was repealed in 1814.

26 See Francois Crouzet, The First Industrialists, Cambridge Univ. Press, 1985, and M.L. Baumber, From Revival to Regency: A History of Keighley and Haworth, 1740-1820, Crabtree Ltd., Keighley, 1983.

27 First Report of the Central Board of His Majesty’s Commissioners for inquiry into the Employment of Children in Factories, with Minutes of Evidence, British Parliamentary Papers, 1833 (450) XX, A1, p. 120.

28 For example, in the case of “LaVie and another Assignees against Philips and another Assignees,” the court upheld the right of a woman to operate as feme sole. In 1764 James Cox and his wife Jane were operating separate businesses, and both went bankrupt within the space of two months. Jane’s creditors sued James’s creditors for the recovery of five fans, goods from her shop that had been taken for James’s debts. The court ruled that, since Jane was trading as a feme sole, her husband did not own the goods in her shop, and thus James’s creditors had no right to seize them. See William Blackstone, Reports of Cases determined in the several Courts of Westminster-Hall, from 1746 to 1779, London, 1781, p. 570-575.

29 See Deborah Valenze, Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England, Princeton Univ. Press, 1985.

30 See Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117.

31 In his autobiography James Hopkinson says of his wife, “How she laboured at the press and assisted me in the work of my printing office, with a child in her arms, I have no space to tell, nor in fact have I space to allude to the many ways she contributed to my good fortune.” James Hopkinson, Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, J.B. Goodman, ed., Routledge & Kegan Paul, 1968, p. 96. A 1739 poem by Mary Collier suggests that carrying babies into the field was fairly common; it contains these lines:

Our tender Babes into the Field we bear,

And wrap them in our Cloaths to keep them warm,

While round about we gather up the Corn;

. . .

When Night comes on, unto our Home we go,

Our Corn we carry, and our Infant too.

Mary Collier, The Woman’s Labour, Augustan Reprint Society, #230, 1985, p. 10. An 1835 Poor Law report stated that in Sussex, “the custom of the mother of a family carrying her infant with her in its cradle into the field, rather than lose the opportunity of adding her earnings to the general stock, though partially practiced before, is becoming very much more general now.” (Quoted in Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 85.)

32 Sarah Johnson of Nottingham claimed that she “Knows it is quite a common custom for mothers to give Godfrey’s and the Anodyne cordial to their infants, ‘it is quite too common.’ It is given to infants at the breast; it is not given because the child is ill, but ‘to compose it to rest, to sleep it,’ so that the mother may get to work. ‘Has seen an infant lay asleep on its mother’s lap whilst at the lace-frame for six or eight hours at a time.’ This has been from the effects of the cordial.” [Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 157] Mary Colton, a lace worker from Nottingham, described her use of the drug to parliamentary investigators thus: “Was confined of an illegitimate child in November, 1839. When the child was a week old she gave it a half teaspoonful of Godfrey’s twice a-day. She could not afford to pay for the nursing of the child, and so gave it Godfrey’s to keep it quiet, that she might not be interrupted at the lace piece; she gradually increased the quantity by a drop or two at a time until it reached a teaspoonful; when the infant was four months old it was so “wankle” and thin that folks persuaded her to give it laudanum to bring it on, as it did other children. A halfpenny worth, which was about a teaspoonful and three-quarters, was given in two days; continued to give her this quantity since February, 1840, until this last past (1841), and then reduced the quantity. She now buys a halfpenny worth of laudanum and a halfpenny worth of Godfrey’s mixed, which lasts her three days. . . . If it had not been for her having to sit so close to work she would never have given the child Godfrey’s. She has tried to break it off many times but cannot, for if she did, she should not have anything to eat.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 630].

33 Elizabeth Leadbeater, who worked for a Birmingham brass-founder, worked while she was nursing and had her mother look after the infant. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 710.] Mrs. Smart, an agricultural worker from Calne, Wiltshire, noted, “Sometimes I have had my mother, and sometimes my sister, to take care of the children, or I could not have gone out.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 65.] More commonly, though, older siblings provided the childcare. “Older siblings” generally meant children of nine or ten years old, and included boys as well as girls. Mrs. Britton of Calne, Wiltshire, left her children in the care of her eldest boy. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 66] In a family from Presteign, Wales, containing children aged 9, 7, 5, 3, and 1, we find that “The oldest children nurse the youngest.” [F.M. Eden, State of the Poor, London: Davis, 1797, vol. iii, p. 904] When asked what income a labourer’s wife and children could earn, some respondents to the 1833 “Rural Queries” assumed that the eldest child would take care of the others, leaving the mother free to work. The returns from Bengeworth, Worcester, report that, “If the Mother goes to field work, the eldest Child had need to stay at home, to tend the younger branches of the Family.” Ewhurst, Surrey, reported that “If the Mother were employed, the elder Children at home would probably be required to attend to the younger Children.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 488 and 593]

34 Parents heard of incidents, such as one reported in the Times (Feb. 6, 1819):

A shocking accident occurred at Llandidno, near Conway, on Tuesday night, during the absence of a miner and his wife, who had gone to attend a methodist meeting, and locked the house door, leaving two children within; the house by some means took fire, and was, together with the unfortunate children, consumed to ashes; the eldest only four years old!

Mothers were aware of these dangers. One mother who admitted to leaving her children at home worried greatly about the risks:

I have always left my children to themselves, and, God be praised! nothing has ever happened to them, though I thought it dangerous. I have many a time come home, and have thought it a mercy to find nothing has happened to them. . . . Bad accidents often happen. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 68.]

Leaving young children home without child care had real dangers, and the fact that most working mothers paid for childcare suggests that they did not consider leaving young children alone to be an acceptable option.

35 In 1840 an observer of Spitalfields noted, “In this neighborhood, where the women as well as the men are employed in the manufacture of silk, many children are sent to small schools, not for instruction, but to be taken care of whilst their mothers are at work.”[ Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 261] In 1840 the wife of a Gloucester weaver earned 2s. a week from running a school; she had twelve students and charged each 2d. a week. [Reports from Assistant Handloom Weavers’ Commissioners, British Parliamentary Papers, 1840 (220) XXIV, p. 419] In 1843 the lace-making schools of the midlands generally charged 3d. per week. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46, 64, 71, 72]

36 At one straw-plaiting school in Hertfordshire,

Children commence learning the trade about seven years old: parents pay 3d. a-week for each child, and for this they are taught the trade and taught to read. The mistress employs about from 15 to 20 at work in a room; the parents get the profits of the children’s labour.[ Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 64]

At these schools there was very little instruction; some time was devoted to teaching the children to read, but they spent most of their time working. One mistress complained that the children worked too much and learned too little, “In my judgment I think the mothers task the children too much; the mistress is obliged to make them perform it, otherwise they would put them to other schools.” Ann Page of Newport Pagnell, Buckinghamshire, had “eleven scholars” and claimed to “teach them all reading once a-day.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 66, 71] The standard rate of 3d. per week seems to have been paid for supervision of the children rather than for the instruction.

37 First Report of the Central Board of His Majesty’s Commissioners for Inquiring into the Employment of Children in Factories, British Parliamentary Papers, 1833 (450) XX, C1 p. 33.

38 Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46.

39 David Davies, The Case of Labourers in Husbandry Stated and Considered, London: Robinson, 1795, p.14. Agricultural wages for this time period are found in Eden, State of the Poor, London: Davis, 1797.

40 In 1843 parliamentary investigator Alfred Austin reports, “Where a girl is hired to take care of children, she is paid about 9d. a week, and has her food besides, which is a serious deduction from the wages of the woman at work.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 26] Agricultural wages in the area were 8d. per day, so even without the cost of food, the cost of child care was about one-fifth of a woman’s wage. One Scottish woman earned 7s. per week in a coal mine and paid 2s.6d., or 36 percent of her income, for the care of her children. [B.P.P. 1844 (592) XVI, p. 6] In 1843 Mary Wright, an “over-looker” at a Buckinghamshire paper factory, paid even more for child care; she told parliamentary investigators that “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for three others.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46] She earned 10s.6d. per week, so her total child-care payments were 21 percent of her wage. Engels put the cost of child care at 1s. or 18d. a week. [Engels, [1845] 1926, p. 143] Factory workers often made 7s. a week, so again these women may have paid around one-fifth of their earnings for child care. Some estimates suggest even higher fractions of women’s income went to child care. The overseer of Wisbech, Cambridge, suggests a higher fraction; he reports, “The earnings of the Wife we consider comparatively small, in cases where she has a large family to attend to; if she has one or two children, she has to pay half, or perhaps more of her earnings for a person to take care of them.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 76]

Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes1 and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, roads, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns, and banks expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable average annual rate of 6.3 percent. Growth in the financial sector, then, outpaced growth in aggregate economic activity; nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.
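A minimal sketch of the growth-rate arithmetic, using the 1820 and 1860 endpoints from Table 1 below; the 6.3 percent figure corresponds to the logarithmic (continuously compounded) average, while the annually compounded average is slightly higher:

```python
import math

# Average annual growth of bank lending, 1820-1860, from Table 1's endpoints.
loans_1820, loans_1860 = 55.1, 691.9   # $ millions
years = 40

log_rate = math.log(loans_1860 / loans_1820) / years
print(f"{log_rate:.3f}")   # ~0.063, i.e. the 6.3 percent cited in the text

# The geometric (annually compounded) average comes out slightly higher:
geo_rate = (loans_1860 / loans_1820) ** (1 / years) - 1
print(f"{geo_rate:.3f}")   # ~0.065
```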

Table 1

Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).


As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector’s value is how and to what extent it evolves with changing economic conditions. Put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and service the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit in promoting regional adaptations, North (1994) credits the adaptability of America’s formal rules and informal constraints that rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island’s banks bore little resemblance to those in faraway Louisiana or Missouri, or even those in neighboring Connecticut. Each state’s banks took a different form, but their purpose was the same; namely, to provide the state’s citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England’s early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia’s banks. By 1850 the average Massachusetts bank had declined in relative size, operating with about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island’s banks also shrank relative to Pennsylvania’s and were tiny compared to the large branch banks in the South and West.

Table 2

Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities

(in $ thousands)



1820 Capital	1820 Loans	1850 Capital	1850 Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia1,2 351.5 340.0 270.3 504.5
South Carolina2 na na 938.5 1,471.5
Kentucky2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital, so that a tax on capital operated as a progressive tax falling disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest, however, that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a better benchmark is the tax-to-loan ratio, because large banks made more use of deposits than small banks. The tax-to-loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.
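
To see why the tax-to-loan ratio is the more informative benchmark, consider a minimal sketch in Python. The balance sheets and the 1 percent capital tax are hypothetical, chosen only to illustrate the accounting; they are not drawn from the Massachusetts reports.

```python
# Illustrative only: hypothetical balance sheets, not actual Massachusetts data.
# A tax on bank capital looks "progressive" when measured against banknote
# circulation, because large banks circulated fewer notes per dollar of
# capital -- but not when measured against total lending, since large banks
# funded more of their loans with deposits.

def tax_burden(capital, notes, deposits, tax_rate=0.01):
    """Return (tax per dollar of notes, tax per dollar of loans).

    Simplifying assumption: loans = capital + notes + deposits
    (no reserves or other assets), with a 1 percent annual tax on capital.
    """
    tax = tax_rate * capital
    loans = capital + notes + deposits
    return tax / notes, tax / loans

# A small country bank: note-heavy, few deposits.
small = tax_burden(capital=100_000, notes=80_000, deposits=20_000)
# A large city bank: deposit-heavy, fewer notes per dollar of capital.
large = tax_burden(capital=500_000, notes=150_000, deposits=350_000)

print(f"small bank: {small[0]:.4f} per $ of notes, {small[1]:.4f} per $ of loans")
print(f"large bank: {large[0]:.4f} per $ of notes, {large[1]:.4f} per $ of loans")
# By construction the tax per dollar of loans is identical (0.0050), yet the
# large bank pays nearly three times as much per dollar of circulation
# (0.0333 vs. 0.0125) -- the text's point about choosing the right benchmark.
```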

Lamoreaux (1994) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.2

Once the kinship orientation of the region’s banks was established it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England’s banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers to discriminate between real and bogus banknotes, or between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region’s port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes had become a constant irritant for city bankers. City bankers believed that country issues displaced Boston banknotes in local transactions. More irritating, though, was the constant demand by the city banks’ customers that country banknotes be accepted on deposit, which placed the burden of interbank clearing on the city banks.3

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank’s notes, it presented them for immediate redemption with an ultimatum: join a regular and organized redemption system or be subject to further unannounced redemption calls.4 Country banks objected to the Suffolk’s proposal because it required them to keep non-interest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near-universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region’s money supply (banknotes and deposits). Indeed, the Suffolk’s system was self-defeating in this regard as well. By increasing confidence in the value of a randomly encountered banknote, it made people willing to hold larger banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England grew increasingly financially backward as a direct result of the region’s unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation, deposit banking, lagged in New England far behind other regions. With such wide acceptance of banknotes, there was little reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks, its supervisory role amounting to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region, but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region


After 1810 or so, many bank charters were granted in New England, but not because of any presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders, and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case, chartered to support the Revolutionary war effort and the finances of the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.5 After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving it the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply, and a plan for a new bank, the Philadelphia Bank, was hatched; its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter and nearly sank the new bank’s chances, until its promoters established a precedent that lasted throughout the antebellum era: they bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of the bank’s shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.6 Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve these excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers, and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. The Emigrant Savings Bank in New York City, indeed, served Irish immigrants almost exclusively. In most other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. The adoption of such names may have been as much marketing ploy as mission statement. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791 when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.7 Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that have influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was the insurance fund itself, which insured holders of banknotes and deposits against loss from bank failure. Ultimately, the fund proved insufficient to protect all bank creditors from loss during the panic of 1837, when eleven failures in rapid succession all but bankrupted it, delaying noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics of the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.
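
The bond-collateral rule can be summarized in a short sketch. The figures below are hypothetical, and the calculation abstracts from liquidation costs and the claims of other creditors:

```python
# A minimal sketch of the free-banking collateral rule: the bank deposits
# bonds with the state treasurer, and if it ever fails to redeem a note,
# the bonds are sold to repay noteholders.

def noteholder_recovery(notes_outstanding, bond_face, bond_price):
    """Fraction of face value noteholders recover from the bond sale."""
    proceeds = bond_face * bond_price
    return min(1.0, proceeds / notes_outstanding)

# If the bonds sell at par, noteholders are made whole.
print(noteholder_recovery(notes_outstanding=100_000,
                          bond_face=100_000, bond_price=1.00))  # 1.0
# If bond prices have fallen, recovery falls short of face value.
print(noteholder_recovery(notes_outstanding=100_000,
                          bond_face=100_000, bond_price=0.70))  # 0.7
```

The design tied the value of the note issue to the market price of the collateral, which is why, as the free-banking literature discussed in the bibliographic essay emphasizes, falling bond prices could leave noteholders short.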

Michigan actually preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely on a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound here as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s; three others introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy the confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in the details but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central-office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural branches found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks thrived throughout the South and West. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and branch banks served the state’s financial needs up to the Civil War. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (consistently the low-profit branch) to 9 percent at Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represent a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor, and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal-improvements plan because many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, so the state required three of its Big Six banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepôt, and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once established, to subsidize the state’s continuing internal-improvements programs of the 1820s and 1830s. Indiana followed a similar strategy. So, too, in different degrees, did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s, when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal-improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company, which built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the hard times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and banking was its focus. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during the prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages at longer than customary periods and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and landowners. What all these banks shared was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and would have imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot simply label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining whether the net benefits were positive or negative requires more research, but for the moment we must accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, all were aimed squarely at a common goal; namely, realizing the region’s economic potential. Banks helped achieve this goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory its entire harvest. The family could sell most of it, and expend the proceeds on consumption goods as needs arose until the next harvest brought a new cash infusion. Crop and livestock inventories are prone to substantial losses, and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used in constructing this essay, in-text citations have been kept to a minimum; a brief bibliographic essay keeps the text more readable and less cluttered. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis, University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976).

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

1 Banknotes were small-denomination IOUs printed by banks that circulated as currency. Modern U.S. currency consists simply of banknotes issued by the Federal Reserve, which holds a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes on goods and services, putting them into circulation. Contemporary law required banks to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held only about 30 percent of the total value of banknotes in circulation as reserves. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
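
The arithmetic behind that leverage can be made explicit. A back-of-the-envelope sketch using the figures in the note above, gross of operating costs and loan losses (which the note does not report):

```python
# The economics of note issue: a bank holding 30 percent specie reserves
# against its circulation could support $100 of interest-earning loans
# with only $30 of specie.

reserve_ratio = 0.30   # specie held per dollar of notes in circulation
notes_issued = 100.0   # banknotes lent out, in dollars
loan_rate = 0.07       # average interest earned on loans

specie_reserves = reserve_ratio * notes_issued   # $30
interest_income = loan_rate * notes_issued       # $7 per year
return_on_specie = interest_income / specie_reserves
print(f"${specie_reserves:.0f} of specie earns ${interest_income:.2f} a year, "
      f"a gross return of {return_on_specie:.1%} on the metal held")  # 23.3%
```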

2 Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks: quasi-charitable organizations designed to encourage savings by the working classes and to provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

3 Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve system provides clearing services between banks: the accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and sends the checks back to the banks on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because a bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.
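
A stylized example of why organized clearing economized on specie: banks holding one another’s notes settle only the net balance. The holdings below are hypothetical:

```python
# Two banks each hold the other's notes. Settling gross requires specie to
# move in both directions; netting requires only the difference to move.

suffolk_holds_country_notes = 12_000   # country bank's notes held in Boston
country_holds_suffolk_notes = 9_000    # Boston notes held by the country bank

gross_redemptions = suffolk_holds_country_notes + country_holds_suffolk_notes
net_settlement = abs(suffolk_holds_country_notes - country_holds_suffolk_notes)
print(f"gross: ${gross_redemptions:,} in specie; net: ${net_settlement:,}")
# Netting cuts the specie that must actually move from $21,000 to $3,000.
```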

4 The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

5 Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but that nonshareholders received loans about 30 percent smaller than shareholders. Whether this was an “insider” bank remains at issue, and depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can usefully be drawn for antebellum banks.

6 Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

7 Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session so that many legislators voted on the bill without having read it thoroughly or at all.

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without other valuable resources, and scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the hides of the wild descendants of cattle introduced by the Spaniards in the early 1600s. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay lay on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region and the base for a strong commercial elite and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic stagnation. Recurrent civil wars, with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center of international conflict, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows a remarkable persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still-spasmodic GDP growth.

GDP growth shows a pattern of Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements in the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (the 1880s, 1900s, 1920s, 1940s and even the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force in setting the cycle up, as financial flows were in the 1970s and 1990s. The usual outcome, however, was an overvalued currency, which obscured the debt problem and threatened the balance of trade by overpricing exports. Crises resulted from a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Crisis	Per capita GDP fall (%)	Length of recession (years)	Time to pre-crisis levels (years)	Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides their cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, fluctuated strongly around similar levels in 1913-1960, and have deteriorated since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and grew moderately in 1970-2002.
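
The relationship among these three series can be stated explicitly: the purchasing power of exports (the income terms of trade) is the export volume index multiplied by the ratio of export to import prices. A minimal sketch with hypothetical index numbers, consistent with the fourfold rise cited for 1870-1913 but not the actual series:

```python
# Income terms of trade: (Px / Pm) * Qx, i.e. export volume deflated by
# relative prices. Index numbers below are illustrative only.

def income_terms_of_trade(px, pm, qx):
    """Export purchasing power index: terms of trade times export volume."""
    return (px / pm) * qx

base = income_terms_of_trade(px=100, pm=100, qx=100)   # base year = 100
later = income_terms_of_trade(px=160, pm=100, qx=250)  # prices +60%, volume +150%
print(later / base)  # 4.0 -- a fourfold rise in the purchasing power of exports
```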

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54 and was mainly domestic-market oriented. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focused on commerce, transport and the traditional state bureaucracy during the first globalization boom; on health care, education and social services during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. By the 1920s, however, the relative prices of land and labor reversed their previous trend, reducing income inequality. This equalizing trend was later reinforced by industrialization policies, democratization, the introduction of wage councils, and the expansion of a welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. The military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, and the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (closely followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weaker during the first half of the twentieth century, deepened significantly during the 1960s, as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, then in the late 1800s both Argentina and Uruguay had a great handicap relative to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reverted afterwards.

The gap in life expectancy at birth has always been much smaller than for the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930, decreased in 1930-1950, and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay	101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina	63 34 38 31 32 29 25 25 24 21 15 16
Brazil	23 8 8 8 8 8 7 9 9 13 11 10
Latin America	13 12 13 10 9 9 9 6 6
USA	100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay	57 65 72 79 85 91 92 94 95 97 99
Argentina	57 65 72 79 85 91 93 94 94 96 98
Brazil	39 38 37 42 46 51 61 69 76 81 86
Latin America	28 30 34 37 42 47 56 65 71 77 83
USA	100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay	23 31 31 30 34 42 52 46 43
Argentina	28 41 42 36 39 43 55 44 45
Brazil	12 11 12 14 18 22 30 42
Latin America
USA	100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay	102 100 91 85 91 97 97 97 95 96 96
Argentina	81 85 86 90 88 90 93 94 95 96 95
Brazil	60 60 56 58 58 63 79 83 85 88 88
Latin America	65 63 58 58 59 63 71 77 81 88 87
USA	100 100 100 100 100 100 100 100 100 100 100

Note: rows with fewer than fourteen entries cover correspondingly shorter spans within 1870-2000.

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the reconstruction that followed the Guerra Grande after 1851, the Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew for several reasons: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and the strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, landholdings were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908 Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and supplied the main part of Uruguay’s services, civil servants and its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness had started to weaken. As the benefits of the old technological paradigm eroded, the new one was not particularly favorable to resource-intensive countries such as Uruguay. International demand shifted away from primary products, the population of Europe grew slowly, and European countries pursued self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s the cattle-breeding sector performed very poorly, owing to a lack of innovation beyond natural pastures. In the 1930s its performance deteriorated further, mainly because of unfavorable international conditions. Export volumes stagnated until the 1970s, while their purchasing power fluctuated strongly with the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, however, lacking capital-goods and technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic process of adapting mature technologies. Domestic demand also grew through an expanding public sector and a developing corporate welfare state. The terms of trade strongly conditioned protectionism, productivity growth and domestic demand: because the government raised revenue by manipulating exchange rates, rising export prices gave the state greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand. A numerical sketch of the exchange-rate mechanism follows.
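
The sketch below illustrates how a multiple exchange rate system works as an implicit tax-and-subsidy scheme. All rates and quantities are hypothetical, chosen only to show the mechanism the paragraph describes, not actual Uruguayan policy parameters.

```python
# Hypothetical illustration of a multiple exchange rate system as a fiscal
# tool. Exporters must surrender their dollar earnings at a low "buy" rate;
# importers of capital goods, raw materials and fuel get a preferential
# "sell" rate; other importers pay a high rate. All numbers are invented.

buy_rate_exports = 1.50      # pesos per dollar paid to exporters (hypothetical)
sell_rate_capital = 1.90     # pesos per dollar for capital-goods imports (hypothetical)
sell_rate_consumer = 3.00    # pesos per dollar for other imports (hypothetical)

export_dollars = 100.0       # dollars surrendered by exporters
capital_import_dollars = 60.0
consumer_import_dollars = 40.0

pesos_paid_out = export_dollars * buy_rate_exports
pesos_taken_in = (capital_import_dollars * sell_rate_capital
                  + consumer_import_dollars * sell_rate_consumer)

# The spread between selling and buying rates is implicit fiscal revenue.
# When export prices rise, more dollars flow through the system and the
# state's capacity to subsidize capital-goods imports expands, which is
# the link between terms of trade and protection drawn in the text.
print(f"implicit revenue: {pesos_taken_in - pesos_paid_out:.1f} pesos")  # 84.0
```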

However, rent-seeking industries searching for protection, together with a weak clientelist state crowded with civil servants recruited in exchange for political favors to the parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, created by both traditional parties as the state expanded at the national and local levels, proved unable to absorb the mounting social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-oriented towards “non-traditional exports,” that is, exports of industrial goods made of traditional raw materials to which low-quality, low-wage labor was added. Exports were also stimulated by strong fiscal exemptions and negative real interest rates, and were re-oriented towards the regional market (Argentina and Brazil) and other developing regions. At the end of the 1970s this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had run above 50 percent since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, limiting exports, encouraging imports and deepening the chronic trade deficit. The “tablita” depended on ever-increasing capital inflows and collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural-resource-intensive exports to the region and other emerging markets with a modest intra-industry trade, mainly with Argentina. In the 1990s Uruguay was once again overexposed to financial capital inflows, which fueled a rather volatile growth period. By the year 2000, however, Uruguay's position relative to the leaders of the world economy, measured by per capita GDP, real wages, equity and education coverage, was much worse than it had been fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole, and each of its member countries, exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by, and highly dependent on, foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations and, above all, the development of productive networks that would allow Mercosur to compete outside its home market with knowledge-intensive goods and services. On that hangs Uruguay's chance to break out of its long-run divergent siesta.


Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review, 1985.

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing into the mid-eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of the Spanish Succession (1702-13), the War of the Austrian Succession (1740-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada, and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were asked only to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century, the Navigation Acts required that all trade within the empire be conducted on ships constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of their final port of destination.

Western Land Policies

The movement for independence arose in the colonies following a series of critical decisions made by the British government after the end of the war with France in 1763. Two themes emerge from what was to be a fundamental change in British economic policy toward the American colonies. The first involved western land. With the acquisition from the French of the territory between the Allegheny Mountains and the Mississippi River the British decided to isolate the area from the rest of the colonies. Under the terms of the Proclamation of 1763 and the Quebec Act of 1774 colonists were not allowed to settle here or trade with the Indians without the permission of the British government. These actions nullified the claims to land in the area by a host of American colonies, individuals, and land companies. The essence of the policy was to maintain British control of the fur trade in the West by restricting settlement by the Americans.

Tax Policies

The second fundamental change involved taxation. The British victory over the French had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had nearly doubled (Brewer, 1989). Furthermore, the British had decided in 1763 to place a standing army of 10,000 men in North America. The bulk of these forces were stationed in the newly acquired territory to enforce the new land policy in the West. Forts were to be built which would become the new centers of trade with the Indians. The British decided that the Americans should share the costs of the military buildup in the colonies. The reason seemed obvious: taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). It was time, in the British view, for the Americans to begin paying a larger share of the expenses of empire.

Accordingly, Parliament passed a series of tax acts whose revenue was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by Prime Minister George Grenville, the act lowered tariff rates on non-British products from the West Indies and tightened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were lower than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.


American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767 a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act, stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Policies, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties, British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Parliament then went on to pass the Quebec Act as a continuation of its policy of restricting settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774, delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened then was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October, the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these were the actions of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England in a speech to Parliament declared that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed, and the actual level of taxation appeared to be relatively modest. After all, soon after adopting the Constitution the Americans taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather, it seemed the incentive for independence might have been the avoidance of British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
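
The arithmetic behind a Thomas-style calculation is simple to sketch. The figures below are hypothetical placeholders, not Thomas's estimates, chosen only to show how gross burdens, offsetting benefits, population and income combine into a net burden expressed as a share of per capita income.

```python
# Schematic version of the counterfactual accounting described above.
# All figures are hypothetical; only the structure follows Thomas (1965).

surplus_loss_indirect_trade = 3_000_000  # consumer + producer surplus lost (hypothetical $)
benefit_protection = 1_900_000           # value of British naval/military protection (hypothetical $)
benefit_bounties = 700_000               # bounties paid to colonial producers (hypothetical $)

population = 2_500_000                   # colonial population (hypothetical round number)
income_per_capita = 60.0                 # dollars per year (hypothetical)

net_burden = surplus_loss_indirect_trade - benefit_protection - benefit_bounties
burden_share = net_burden / (population * income_per_capita)

# With these placeholder inputs the net burden comes to about 0.27 percent of
# income, i.e., in the "less than one percent" range that led Thomas to doubt
# the Acts were a cause of the Revolution.
print(f"net burden: {100 * burden_share:.2f}% of colonial income")
```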

American Expectations about Future British Policy

Did this mean, then, that the Americans had few if any economic incentives for independence? Upon further consideration economic historians realized that what mattered to the colonists may have been not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear that the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution


British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth in the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army with little interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also for a variety of reasons the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence Congress had set about defining the institutional relationship between it and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially to try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive Washington reduced the probability of losing his army. Second, over time the British might tire of the war.


Frustrated by the lack of a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and sent south along the Hudson River, where it was to link up with an army sent from New York City. Unfortunately for the British, the plan unraveled completely: in October Burgoyne's army was defeated at the Battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources, as the states were issuing their own individual currencies to help finance expenditures. Moreover, the British, in an effort to destroy the funding system of the Continental Congress, had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988; Michener, 1988). Furthermore, inflation may have been enhanced by any negative impact upon output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was a serious problem for Congress, since up to this point almost ninety percent of its revenue had been generated from currency emissions.


British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and instead organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With its currency rapidly depreciating in value Congress increasingly relied on funds from other sources such as state requisitions, domestic loans, and French loans of specie. As a last resort Congress authorized the army to confiscate property.


Fortunately for the Americans, the British military effort collapsed before the funding system of Congress did. In a combined effort during the fall of 1781, French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces, the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms of peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through a political lens, the Treaty of Paris was also a momentous economic achievement for the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.


When you start a revolution, you have to be prepared for the possibility that you might win. That means being prepared to form a new government. When the Americans declared independence, their experience of governing at a national level was quite limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from the colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks, at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, the delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states, Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding the political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs. It was not given the power to tax or regulate commerce. The expenses of Congress were to be made from a common treasury with funds supplied by the states. This revenue was to be generated from exercising the power granted to the states to determine their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles. It took over three years for the states to ratify the Articles. The primary reason for the delay was a dispute over control of land in the West as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress. The Articles were then ratified and put into effect on March 1, 1781. This was just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one to be addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784 Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and possessing no power of taxation, Congress did not have an independent source of revenue to pay off the domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are, of course, many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things, the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, no state under the Constitution would be allowed to coin money or issue bills of credit. Only the national government could coin money and regulate its value, and punishment was provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s; the national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long-run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

Table 1 Continental Dollar Emissions (1775-1779)

Year of   Nominal Dollars   Share of Total    Specie Value of   Share of Total
Emission  Emitted (000)     Nominal Emitted   Emission (000)    Specie Emitted
1775      $6,000            3%                $6,000            15%
1776      19,000            8                 15,330            37
1777      13,000            5                 4,040             10
1778      63,000            26                10,380            25
1779      140,500           58                5,270             13
Total     $241,500          100%              $41,020           100%

Source: Bullock (1895), 135.
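
Table 1 also implies how fast the currency depreciated: dividing the specie value of each year's emission by its nominal value gives an average specie price of the Continental dollar for that year. A minimal sketch of that calculation, using only the figures in the table:

```python
# Implied average specie value per nominal Continental dollar, by year of
# emission, computed directly from the Table 1 figures (Bullock 1895).

emissions = {  # year: (nominal $000, specie value $000)
    1775: (6_000, 6_000),
    1776: (19_000, 15_330),
    1777: (13_000, 4_040),
    1778: (63_000, 10_380),
    1779: (140_500, 5_270),
}

for year, (nominal, specie) in emissions.items():
    print(f"{year}: about {specie / nominal:.2f} specie dollars per Continental dollar")

# The ratio falls from 1.00 (1775) to 0.31 (1777) and under 0.04 (1779).
# These are annual averages; the text's "about twenty percent" refers to the
# year-end 1777 value, which lies below the 1777 average.
```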
Table 2 Currency Emissions by the States (1775-1781)

Year of   Nominal Dollars   Year of   Nominal Dollars
Emission  Emitted (000)     Emission  Emitted (000)
1775      $4,740            1778      $9,118
1776      13,328            1779      17,613
1777      9,573             1780      66,813
                            1781      123,376
Total     $27,641           Total     $216,376

Source: Robinson (1969), 327-28.


Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688- 1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1 no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48 no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw- Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). Lanham, MD: University Press of America, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607- 1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48 no. 3 (1988): 682-692.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49 No. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of the Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

The National Recovery Administration

Barbara Alexander, Charles River Associates

This article outlines the history of the National Recovery Administration, one of the most important and controversial agencies in Roosevelt’s New Deal. It discusses the agency’s “codes of fair competition” under which antitrust law exemptions could be granted in exchange for adoption of minimum wages, problems some industries encountered in their subsequent attempts to fix prices under the codes, and the macroeconomic effects of the program.

The early New Deal suspension of antitrust law under the National Recovery Administration (NRA) is surely one of the oddest episodes in American economic history. In its two-year life, the NRA oversaw the development of so-called “codes of fair competition” covering the larger part of the business landscape.1 The NRA generally is thought to have represented a political exchange whereby business gave up some of its rights over employees in exchange for permission to form cartels.2 Typically, labor is taken to have gotten the better part of the bargain: the union movement extended its new powers after the Supreme Court struck down the NRA in 1935, while the business community faced a newly aggressive FTC by the end of the 1930s. While this characterization may be true in broad outline, close examination of the NRA reveals that matters may be somewhat more complicated than is suggested by the interpretation of the program as a win for labor contrasted with a missed opportunity for business.

Recent evaluations of the NRA have wended their way back to themes sounded during the early nineteen-thirties, in particular the interrelationships between the so-called “trade practice” or cartelization provisions of the program and the grant of enhanced bargaining power to trade unions.3 On the microeconomic side, allowing unions to bargain for industry-wide wages may have facilitated cartelization in some industries. Meanwhile, macroeconomists have suggested that the Act and its progeny, especially labor measures such as the National Labor Relations Act, may bear more responsibility for the length and severity of the Great Depression than has been recognized heretofore.4 If this thesis holds up to closer scrutiny, the era may come to be seen as a primary example of the potential macroeconomic costs of shifts in political and economic power.

Kickoff Campaign and Blanket Codes

The NRA began operations in a burst of “ballyhoo” during the summer of 1933. 5 The agency was formed upon passage of the National Industrial Recovery Act (NIRA) in mid-June. A kick-off campaign of parades and press events succeeded in getting over 2 million employers to sign a preliminary “blanket code” known as the “President’s Re-Employment Agreement.” Signatories of the PRA pledged to pay minimum wages ranging from around $12 to $15 per 40-hour week, depending on size of town. Some 16 million workers were covered, out of a non-farm labor force of some 25 million. “Share-the-work” provisions called for limits of 35 to 40 hours per week for most employees. 6

NRA Codes

Over the next year and a half, the blanket code was superseded by over 500 codes negotiated for individual industries. The NIRA provided that: “Upon the application to the President by one or more trade or industrial associations or groups, the President may approve a code or codes of fair competition for the trade or industry.” 7 The carrot held out to induce participation was enticing: “any code … and any action complying with the provisions thereof . . . shall be exempt from the provisions of the antitrust laws of the United States.” 8 Representatives of trade associations overran Washington, and by the time the NRA was abolished, hundreds of codes covering over three-quarters of private, non-farm employment had been approved.9 Code signatories were supposed to be allowed to use the NRA “Blue Eagle” as a symbol that “we do our part” only as long as they remained in compliance with code provisions.10

Disputes Arise

Almost 80 percent of the codes had provisions that were directed at the establishment of price floors.11 The Act did not specifically authorize businesses to fix prices, and indeed it specified that “. . . codes are not designed to promote monopolies.” 12 However, it is an understatement to say that there was never any consensus among firms, industries and NRA officials as to precisely what was to be allowed as part of an acceptable code. Arguments about exactly what the NIRA allowed, and how the NRA should implement the Act, began during its drafting and continued unabated throughout its life. The arguments extended from the level of general principles to the smallest details of policy, unsurprising given that appropriate regulatory design depends entirely on precise regulatory objectives, which here were embroiled in dispute from start to finish.

To choose just one out of many examples of such disputes: There was a debate within the NRA as to whether “code authorities” (industry governing bodies) should be allowed to use industry-wide or “representative” cost data to define a price floor based on “lowest reasonable cost.” Most economists would understand this type of rule as a device that would facilitate monopoly pricing. However, a charitable interpretation of the views of administration proponents is that they had some sort of “soft competition” in mind. That is, they wished to develop and allow the use of mechanisms that would extend to more fragmented industries a type of peaceful coexistence more commonly associated with oligopoly. Those NRA supporters of the representative-cost-based price floor imagined that a range of prices would emerge if such a floor were to be set, whereas detractors believed that “the minimum would become the maximum,” that is, the floor would simply be a cartel price, constraining competition across all firms in an industry.13
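
A small numerical illustration of the two readings, with invented costs, may help. If a floor is set at a “representative” industry cost, efficient firms are barred from pricing at their own lower costs; whether the floor then anchors soft competition above it, or simply becomes the cartel price, is exactly what was in dispute.

```python
# Hypothetical illustration of a "lowest reasonable cost" price floor derived
# from representative industry cost data. All costs are invented.

unit_costs = [1.00, 1.10, 1.25, 1.40, 1.60]  # per-unit costs of five firms

# One candidate rule: a representative industry cost, here the simple mean.
representative_cost = sum(unit_costs) / len(unit_costs)  # 1.27

print(f"floor from representative cost: {representative_cost:.2f}")
print(f"lowest actual unit cost:        {min(unit_costs):.2f}")

# Under competition, price would be pushed toward the most efficient firms'
# costs (1.00-1.10). A floor at 1.27 forbids precisely that price cutting.
# Proponents imagined a range of prices emerging above the floor ("soft
# competition"); detractors predicted the floor would become the one price
# charged, i.e., "the minimum would become the maximum."
```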

Price Floors

While a rule allowing emergency price floors based on “lowest reasonable cost” was eventually approved, there was no coherent NRA program behind it.14 Indeed, the NRA and code authorities often operated at cross-purposes. At the same time that some officials of the NRA arguably took actions to promote softened competition, some in industry tried to implement measures more likely to support hard-core cartels, even when they thereby reduced the chance of soft competition should collusion fail. For example, with the partial support of the NRA, many code authorities moved to standardize products, shutting off product differentiation as an arena of potential rivalry, in spite of its role as one of the strongest mechanisms that might soften price competition.15 Of course if one is looking to run a naked price-fixing scheme, it is helpful to eliminate product differentiation as an avenue for cost-raising, profit-eroding rivalry. An industry push for standardization can thus be seen as a way of supporting hard-core cartelization, while less enthusiasm on the part of some administration officials may have reflected an understanding, however intuitive, that socially more desirable soft competition required that avenues for product differentiation be left open.

National Recovery Review Board

According to some critical observers then and later, the codes did lead to an unsurprising sort of “golden age” of cartelization. The National Recovery Review Board, led by an outraged Clarence Darrow (of Scopes “monkey trial” fame), concluded in May of 1934 that “in certain industries monopolistic practices existed.” 16 While there are legitimate examples of every variety of cartelization occurring under the NRA, many contemporaneous and subsequent assessments of Darrow’s work dismiss the Board’s “analysis” as hopelessly biased. Thus although its conclusions are interesting as a matter of political economy, it is far from clear that the Board carried out any dispassionate inventory of conditions across industries, much less a real weighing of evidence.17

Compliance Crisis

In contrast to Darrow’s perspective, other commentators focus on the “compliance crisis” that erupted within a few months of passage of the NIRA.18 Many industries were faced with “chiselers” who refused to respect code pricing rules. Firms that attempted to uphold code prices in the face of defection lost both market share and respect for the NRA.

NRA state compliance offices had recorded over 30,000 “trade practice” complaints by early 1935.19 However, the compliance program was characterized by “a marked timidity on the part of NRA enforcement officials.” 20 This timidity was fatal to the program, since monopoly pricing can easily be more damaging than is the most bare-knuckled competition to a firm that attempts it without parallel action from its competitors. NRA hesitancy came about as a result of doubts about whether a vigorous enforcement effort would withstand constitutional challenge, a not-unrelated lack of support from the Department of Justice, public antipathy for enforcement actions aimed at forcing sellers to charge higher prices, and unabating internal NRA disputes about the advisability of the price-fixing core of the trade practice program.21 Consequently, by mid-1934, firms disinclined to respect code pricing rules were ignoring them. By that point then, contrary to the initial expectations of many code signatories, the new antitrust regime represented only permission to form voluntary cartelization agreements, not the advent of government-enforced cartels. Even there, participants had to be discreet, so as not to run afoul of the antimonopoly language of the Act.

It is still far from clear how much market power was conferred by the NRA’s loosening of antitrust constraints. Of course, modern observers of the alternating successes and failures of cartels such as OPEC will not be surprised that the NRA program led to mixed results. In the absence of government enforcement, the program simply amounted to de facto legalization of self-enforcing cartels. With respect to the ease of collusion, economic theory is clear only on the point that self-enforceability is an open question; self-interest may lead to either breakdown of agreements or success at sustaining them.
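
Economic theory's ambivalence here can be made concrete with the standard repeated-game benchmark, offered as a textbook aid rather than as NRA-era analysis: with n identical price-setting firms sharing monopoly profit equally and reverting permanently to competitive (zero-profit) pricing after any defection, collusion is self-enforcing only if each firm's discount factor delta satisfies delta >= 1 - 1/n.

```python
# Grim-trigger benchmark for self-enforcing collusion among n identical
# Bertrand competitors. Sharing monopoly profit gives each firm (1/n) per
# period; undercutting yields the whole profit (1) once, then zero forever.
# Collusion holds iff (1/n) / (1 - delta) >= 1, i.e. delta >= 1 - 1/n.
# A textbook illustration, not a model of any particular NRA code.

def critical_discount_factor(n_firms: int) -> float:
    """Minimum discount factor sustaining a grim-trigger cartel of n firms."""
    return 1.0 - 1.0 / n_firms

for n in (2, 5, 10, 50):
    print(f"{n:>3} firms: delta must be at least {critical_discount_factor(n):.2f}")

# Required patience approaches 1 as n grows, which is consistent with the
# repeated breakdown of voluntary code pricing in the fragmented industries
# the NRA covered once government enforcement proved timid.
```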

Conflicts between Large and Small Firms

Some part of the difficulties encountered by NRA cartels may have had roots in a progressive mandate to offer special protection to the “little guy.” The NIRA had specified that acceptable codes of fair competition must not “eliminate or oppress small enterprises,” 22 and that “any organization availing itself of the benefits of this title shall be truly representative of the trade or industry . . . Any organization violating … shall cease to be entitled to the benefits of this title.” 23 Majority rule provisions were exceedingly common in codes, and were most likely a reflection of this statutory mandate. The concern for small enterprise had strong progressive roots.24 Justice Brandeis’s well-known antipathy for large-scale enterprise and concentration of economic power reflected a widespread and long-standing debate about the legitimate goals of the American experiment.

In addition to evaluating monopolization under the codes, the Darrow board had been charged with assessing the impact of the NRA on small business. Its conclusion was that “in certain industries small enterprises were oppressed.” Again, however, as with its review of monopolization, the Board may have seen only what it was predisposed to see. A number of NRA “code histories” detail conflicts within industries in which small, higher-cost producers sought to use majority-rule provisions to support pricing at levels above those desired by larger, lower-cost producers. In the absence of effective enforcement by the government, such prices were doomed to break down, triggering repeated price wars in some industries.25

By 1935, there was understandable bitterness about what many businesses viewed as the lost promise of the NRA. Undoubtedly, the bitterness was exacerbated by the fact that the NRA wanted higher wages while failing to deliver the tools needed for effective cartelization. However, it is not entirely clear that everyone in the business community felt that the labor provisions of the Act were undesirable.26

Labor and Employment Issues

By their nature, market economies give rise to surplus-eroding rivalry among those who would be better off collectively if they could only act in concert. NRA codes of fair competition, specifying agreements on pricing and terms of employment, arose from a perceived confluence of interests among representatives of “business,” “labor,” and “the public” in muting that rivalry. Many proponents of the NIRA held that competitive pressures on business had led to downward pressure on wages, which in turn caused low consumption, leading to greater pressure on business, and so on. Allowing workers to organize and bargain collectively, while their employers pledged to one another not to sell below cost, was identified as a way to arrest harmful deflationary forces. Knowledge that one’s rivals would also be forced to pay “code wages” had some potential for aiding cartel survival. Thus the rationale for NRA wage supports at the microeconomic level potentially dovetailed with the macroeconomic theory by which higher wages were held to support higher consumption and, in turn, higher prices.

Labor provisions of the NIRA appeared in Section 7: “. . . employees shall have the right to organize and bargain collectively through representatives of their own choosing … employers shall comply with the maximum hours of labor, minimum rates of pay, and other conditions of employment…” 27 Each “code of fair competition” had to include labor provisions acceptable to the National Recovery Administration, developed during a process of negotiations, hearings, and review. Thus in order to obtain the shield against antitrust prosecution for their “trade practices” offered by an approved code, significant concessions to workers had to be made.

The NRA is generally judged to have been a success for labor and a miserable failure for business. However, evaluation is complicated to the extent that labor could not have achieved gains with respect to collective bargaining rights over wages and working conditions, had those rights not been more or less willingly granted by employers operating under the belief that stabilization of labor costs would facilitate cartelization. The labor provisions may have indeed helped some industries as well as helping workers, and for firms in such industries, the NRA cannot have been judged a failure. Moreover, while some businesses may have found the Act beneficial, because labor cost stability or freedom to negotiate with rivals enhanced their ability to cooperate on price, it is not entirely obvious that workers as a class gained as much as is sometimes contended.

The NRA did help solidify new and important norms regarding child labor, maximum hours, and other conditions of employment; it will never be known if the same progress could have been made had not industry been more or less hornswoggled into giving ground, using the antitrust laws as bait. Whatever the long-term effects of the NRA on worker welfare, the short-term gains for labor associated with higher wages were questionable. While those workers who managed to stay employed throughout the nineteen thirties benefited from higher wages, to the extent that workers were also consumers, and often unemployed consumers at that, or even potential entrepreneurs, they may have been better off without the NRA.

The issue is far from settled. Ben Bernanke and Martin Parkinson examine the economic growth that occurred during the New Deal in spite of higher wages and suggest “part of the answer may be that the higher wages ‘paid for themselves’ through increased productivity of labor. Probably more important, though, is the observation that with imperfectly competitive product markets, output depends on aggregate demand as well as the real wage. Maybe Herbert Hoover and Henry Ford were right: Higher real wages may have paid for themselves in the broader sense that their positive effect on aggregate demand compensated for their tendency to raise cost.” 28 However, Christina Romer establishes a close connection between NRA programs and the failure of wages and prices to adjust to high unemployment levels. In her view, “By preventing the large negative deviations of output from trend in the mid-1930s from exerting deflationary pressure, [the NRA] prevented the economy’s self-correction mechanism from working.” 29

Aftermath of the Supreme Court’s Ruling in the Schechter Case

The Supreme Court struck down the NRA on May 27, 1935; the case was a dispute over violations of labor provisions of the “Live Poultry Code” allegedly perpetrated by the Schechter Poultry Corporation. The Court held the code to be invalid on grounds of “attempted delegation of legislative power and the attempted regulation of intrastate transactions which affect interstate commerce only indirectly.” 30 There were to be no more grand bargains between business and labor under the New Deal.

Riven by divergent agendas rooted in industry- and firm-specific technology and demand, “business” was never able to speak with even the tenuous degree of unity achieved by workers. Following the abortive attempt to get the government to enforce cartels, firms and industries went their own ways, using a variety of strategies to enhance their situations. A number of sectors did succeed in getting passage of “little NRAs” with mechanisms tailored to mute competition in their particular circumstances. These mechanisms included the Robinson-Patman Act, aimed at strengthening traditional retailers against the ability of chain stores to buy at lower prices; the Guffey Acts, in which high-cost bituminous coal operators and coal miners sought protection from the competition of lower-cost operators; and the Motor Carrier Act, in which high-cost incumbent truckers obtained protection against new entrants.31

Ongoing macroeconomic analysis suggests that the general public interest may have been poorly served by the experiment of the NRA. Like many macroeconomic theories, the validity of the underconsumption scenario that was put forth in support of the program depended on the strength and timing of the operation of its various mechanisms. Increasingly it appears that the NRA set off inflationary forces thought by some to be desirable at the time, but that in fact had depressing effects on demand for labor and on output. Pure monopolistic deadweight losses probably were less important than higher wage costs (although there has not been any close examination of inefficiencies that may have resulted from the NRA’s attempt to protect small, higher-cost producers). The strength of any mitigating effects on aggregate demand remains to be established.

1 Leverett Lyon, P. Homan, L. Lorwin, G. Terborgh, C. Dearing, L. Marshall, The National Recovery Administration: An Analysis and Appraisal, Washington: Brookings Institution, 1935, p. 313, footnote 9.

2 See, for example, Charles Frederick Roos, NRA Economic Planning, Colorado Springs: Cowles Commission, 1935, p. 343.

3 See, for example, Colin Gordon, New Deals: Business, Labor, and Politics in America, 1920-1935, New York: Cambridge University Press, 1993, especially chapter 5.

4 Christina D. Romer, “Why Did Prices Rise in the 1930s?” Journal of Economic History 59, no. 1 (1999): 167-199; Michael Weinstein, Recovery and Redistribution under the NIRA, Amsterdam: North Holland, 1980; and Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression,” Working Paper 597, Federal Reserve Bank of Minneapolis, February 2001. But also see Ben Bernanke and Martin Parkinson, “Unemployment, Inflation and Wages in the American Depression: Are There Lessons for Europe?” American Economic Review: Papers and Proceedings 79, no. 2 (1989): 210-214.

5 See, for example, Donald Brand, Corporatism and the Rule of Law: A Study of the National Recovery Administration, Ithaca: Cornell University Press, 1988, p. 94.

6 See, for example, Roos, op. cit., pp. 77, 92.

7 Section 3(a) of The National Industrial Recovery Act, reprinted at p. 478 of Roos, op. cit.

8 Section 5 of The National Industrial Recovery Act, reprinted at p. 483 of Roos, op. cit. Note though, that the legal status of actions taken during the NRA era was never clear; Roos points out that “…President Roosevelt signed an executive order on January 20, 1934, providing that any complainant of monopolistic practices … could press it before the Federal Trade Commission or request the assistance of the Department of Justice. And, on the same date, Donald Richberg issued a supplementary statement which said that the provisions of the anti-trust laws were still in effect and that the NRA would not tolerate monopolistic practices.” (Roos, op. cit. p. 376.)

9 Lyon, op. cit., p. 307, cited at p. 52 in Cole and Ohanian, op. cit.

10 Roos, op. cit., p. 75; and Blackwell Smith, My Imprint on the Sands of Time: The Life of a New Dealer, New York: Vantage Press, p. 109.

11 Lyon, op. cit., p. 570.

12 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

13 Roos, op. cit., at pp. 254-259. Charles Roos comments that “Leon Henderson and Blackwell Smith, in particular, became intrigued with a notion that competition could be set up within limits and that in this way wide price variations tending to demoralize an industry could be prevented.”

14 Lyon, et al., op. cit., p. 605.

15 Smith, Assistant Counsel of the NRA (per Roos, op. cit., p. 254), has the following to say about standardization: “One of the more controversial subjects, which we didn’t get into too deeply, except to draw guidelines, was standardization.” Smith goes on to discuss the obvious need to standardize rail track gauges, plumbing fittings, and the like, but concludes, “Industry on the whole wanted more standardization than we could go with.” (Blackwell Smith, op. cit., pp. 106-7.) One must not go overboard looking for coherence among the various positions espoused by NRA administrators; along these lines it is worth remembering Smith’s statement some 60 years later: “Business’s reaction to my policy [Smith was speaking generally here of his collective proposals] to some extent was hostile. They wished that the codes were not as strict as I wanted them to be. Also, there was criticism from the liberal/labor side to the effect that the codes were more in favor of business than they should have been. I said, ‘We are guided by a squealometer. We tune policy until the squeals are the same pitch from both sides.'” (Smith, op. cit., p. 108.)

16 Quoted at p. 378 of Roos, op. cit.

17 Brand, op. cit. at pp. 159-60 cites in agreement extremely critical conclusions by Roos (op. cit. at p. 409) and Arthur Schlesinger, The Age of Roosevelt: The Coming of the New Deal, Boston: Houghton Mifflin, 1959, p. 133.

18 Roos acknowledges a breakdown by spring of 1934: “By March, 1934 something was urgently needed to encourage industry to observe code provisions; business support for the NRA had decreased materially and serious compliance difficulties had arisen.” (Roos, op. cit., at p. 318.) Brand dates the start of the compliance crisis much earlier, in the fall of 1933. (Brand, op. cit., p. 103.)

19 Lyon, op. cit., p. 264.

20 Lyon, op. cit., p. 268.

21 Lyon, op. cit., pp. 268-272. See also Peter H. Irons, The New Deal Lawyers, Princeton: Princeton University Press, 1982.

22 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

23 Section 6(b) of The National Industrial Recovery Act, op. cit.

24 Brand, op. cit.

25 Barbara Alexander and Gary D. Libecap, “The Effect of Cost Heterogeneity in the Success and Failure of the New Deal’s Agricultural and Industrial Programs,” Explorations in Economic History, 37 (2000), pp. 370-400.

26 Gordon, op. cit.

27 Section 7 of the National Industrial Recovery Act, reprinted at pp. 484-5 of Roos, op. cit.

28 Bernanke and Parkinson, op. cit., p. 214.

29 Romer, op. cit., p. 197.

30 Supreme Court of the United States, Nos. 854 and 864, October term, 1934 (decision issued May 27, 1935). Reprinted in Roos, op. cit., p. 580.

31 Ellis W. Hawley, The New Deal and the Problem of Monopoly: A Study in Economic Ambivalence, Princeton: Princeton University Press, 1966, p. 249; Irons, op. cit., pp. 105-106, 248.

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, typically over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago, in 1900, about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged use of labor saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken around 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4


Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality Rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.


Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were cheaply built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889-1901

(Fatality Rates per Thousand Workers per Year)

Worker group                              1889    1895    1901
British railroad workers (all causes)     1.14    0.95    0.89
British trainmena (all causes)            4.26    3.22    2.21
    Coupling                              0.94    0.83    0.74
American railroad workers (all causes)    2.67    2.31    2.50
American trainmen (all causes)            8.52    6.45    7.35
    Coupling                              1.73c   1.20    0.78
    Brakingb                              3.25c   2.44    2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.


American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also dated from this period, but most were understaffed and they too had little power.9


The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Federal Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became a matter of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and reported that he was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and the National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies, such as DuPont, and in whole industries, such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement: safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, six disastrous blasts that killed 276 men in 1940 finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.
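As a quick check, the roughly 38 percent decline in manufacturing injury rates cited above follows directly from Table 4. A minimal sketch of the arithmetic in Python, using the figures as they appear in the table:

```python
# Verify the roughly 38 percent decline in manufacturing injury rates
# between 1926 and 1939 claimed in the text, from Table 4's figures
# (injuries per million manhours).
rate_1926, rate_1939 = 24.2, 14.9
decline_pct = (rate_1926 - rate_1939) / rate_1926 * 100
print(f"{decline_pct:.1f}% decline")  # 38.4% decline
```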

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) in 1977. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850-World War I.” Bulletin of the History of Medicine 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London: HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David, and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be 0.0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers result in [10/(450 × 2000)] × 1,000,000 = 11.1 injuries per million hours worked.
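These conversions are simple enough to express in a few lines of code. The following is a minimal sketch in Python; the function names and the 2,000-hour work year are illustrative conventions taken from the footnote, not from the underlying sources:

```python
# Convert raw injury counts into the rate conventions used in this article.

def rate_per_workers(injuries, workers, scale=1_000):
    """Injuries per `scale` workers per year (e.g., per thousand)."""
    return injuries / workers * scale

def rate_per_million_hours(injuries, workers, hours_per_year=2_000):
    """Injuries per million hours worked, given an average work year."""
    return injuries / (workers * hours_per_year) * 1_000_000

# The footnote's example: ten injuries among 450 workers in one year.
print(rate_per_workers(10, 450))        # ~22.2 per thousand workers
print(rate_per_million_hours(10, 450))  # ~11.1 per million hours worked
```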

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System; Rosenberg, Technology; Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First chapter 1.

7 Aldrich, Safety First chapter 3

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety; Aldrich, “‘The Needless Peril.'”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,'” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida


The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal fuel, there was little need to use mineral fuel in seventeenth and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make the improvements needed to accommodate coal barge traffic and to streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields and seaboard markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure 1: Percentage of Seaboard Coal Consumption by Origin, 1822-1842


Source: Hunt’s Merchants’ Magazine and Commercial Review 8 (June 1843): 548; Alfred Chandler, “Anthracite Coal and the Beginnings of the ‘Industrial Revolution,'” p. 154.

The Spread of Coal Mining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela, and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years, often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was less dangerous in the era before deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power, even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions insured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842, when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in his union, which struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no fewer than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and by 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
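The inflation adjustment behind these figures is a standard deflation of nominal prices by a price index. Here is a minimal sketch in Python, using made-up placeholder numbers rather than the actual Civil War series:

```python
# Deflate a nominal price by a price index to express it in base-year
# dollars, then compare it with the base-year price.

def real_price(nominal, index, base_index):
    """Nominal price restated in base-year terms."""
    return nominal * base_index / index

# Placeholder values for illustration only (not the actual wartime data):
# the nominal price doubles while the general price level rises 55 percent.
nominal_1860, index_1860 = 3.00, 100.0
nominal_1864, index_1864 = 6.00, 155.0

r = real_price(nominal_1864, index_1864, index_1860)
change_pct = (r / nominal_1860 - 1) * 100
print(f"real 1864 price: ${r:.2f} ({change_pct:+.0f}% vs. 1860)")
```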

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms, or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing its railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some miners used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water line. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the implementation of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens, and national production of coke stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA listed about a quarter of a million members, had raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised 57 million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coalfields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company symbolized a new coal industry in which hard-line positions developed in both labor’s and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.


Table 1: Coal Production in the United States, 1829-1899

(Coal production in thousands of tons)

Year    Anthracite    Bituminous    Percent Increase over Decade    Tons per Capita
1829    138           102           n/a                             0.02
1839    1,008         552           550                             0.09
1849    3,995         2,453         313                             0.28
1859    9,620         6,013         142                             0.50
1869    17,083        15,821        110                             0.85
1879    30,208        37,898        107                             1.36
1889    45,547        95,683        107                             2.24
1899    60,418        193,323       80                              3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
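The table’s two derived columns follow directly from the production figures. The sketch below (Python) reworks the arithmetic for 1839 and 1899; the population figure of roughly 76 million for 1900 is an outside approximation, not something given in the table:

```python
# Recompute Table 1's derived columns from the raw production figures
# (production is given in thousands of tons).
anthracite = {1829: 138, 1839: 1_008, 1899: 60_418}
bituminous = {1829: 102, 1839: 552, 1899: 193_323}

def pct_increase_over_decade(year):
    """Percent growth of total output relative to a decade earlier."""
    total_now = anthracite[year] + bituminous[year]
    total_then = anthracite[year - 10] + bituminous[year - 10]
    return (total_now / total_then - 1) * 100

print(round(pct_increase_over_decade(1839)))  # 550, matching the table

# Tons per capita for 1899, using a population of roughly 76 million:
tons_1899 = (anthracite[1899] + bituminous[1899]) * 1_000
print(round(tons_1899 / 76_000_000, 2))       # about 3.34, matching the table
```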

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, CT: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.