EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Cliometrics

John Lyons, Miami University

Lou Cain, Loyola University Chicago and Northwestern University

Sam Williamson, Miami University

Introduction

In the 1950s a small group of North American scholars adopted a revolutionary approach to investigating the economic past that soon spread to Great Britain and Ireland, the European mainland, Australia, New Zealand, and Japan. What was first called “The New Economic History,” then “Cliometrics,” was impelled by the promise of significant achievement, by the novelties of the recent (mathematical) formalization of economic theory, by the rapid spread of econometric methods, and by the introduction of computers into academia. Cliometrics has three obvious elements: use of quantifiable evidence, use of theoretical concepts and models, and use of statistical methods of estimation and inference. It also has an important fourth element: employment of the historian’s skills in judging provenance and quality of sources, in placing an investigation in institutional and social context, and in choosing subject matter of significance to history as well as economics. Although the term cliometrics is used to describe work in a variety of historical social and behavioral sciences, the discussion here focuses on economic history.

A quantitative-analytical approach to economic history developed in the interwar years through the work of such scholars as Simon Kuznets in the U.S. and Colin Clark in Britain. Characteristic elements of cliometrics were stimulated by events, by changes in economics, and by an intensification of what might be called the statistical impulse.

First, depression, war, the dissolution of empires, a renewal of widespread and more rapid growth in the Western world, and the challenge of Soviet-style economic planning combined to focus attention on the sources and mechanisms of economic growth and development.

Second, new intellectual currents in economics, spurred in part by contemporary economic problems, arose and came to dominate the profession. In the 1930s, and especially during the war, theoretical approaches to the aggregate economy and its capabilities grew out of the new Keynesian macroeconomics and the development of national income accounting. Explicit techniques for analyzing resource allocation in detail were introduced and employed in wartime planning. Econometrics, the statistical analysis of economic data, continued to grow apace.

Third, the gathering of facts – with an emphasis on systematic arrays of quantitative facts – became more important. By the nineteenth century governments, citizens and scholars had become preoccupied with fact-gathering, but their collations were ordinarily ad hoc and unsystematic. Thoroughness and system became the desideratum of scholarly fact-gathering in the twentieth century.

All these forces had an impact on the birth of a more rigorous way of examining our economic past.

The New Economic History in North America

Cliometrics was unveiled formally in Williamstown, Massachusetts, in the autumn of 1957 at an unusual four-day gathering sponsored by the Economic History Association and the Conference on Research in Income and Wealth. Most of the program was designed to showcase recent work by economists who had ventured into history.

Young scholars in the Income and Wealth group presented their contributions to the historical national accounts of the United States and Canada, spearheaded by Robert Gallman’s estimates of U.S. commodity output, 1839-1899. A pair of headline sessions dealt with method; the one on economic theory and economic history was headed by Walt Rostow, who recalled his undergraduate years in the 1930s at Yale, where he had been led to ask himself “why not see what happened if the machinery of economic theory was brought to bear on modern economic history?” He asserted “economic history is a less interesting field than it could be, because we do not remain sufficiently loyal to the problem approach, which in fact underlies and directs our efforts.”

Newcomers John R. Meyer and Alfred H. Conrad presented two papers. The first was “Economic Theory, Statistical Inference, and Economic History” (1957), a manifesto for using formal theory and econometric methods to examine historical questions. They argued that particular historical circumstances are instances of more general phenomena, suitable for theoretical analysis, and that quantitative historical evidence, although relatively scarce, is much more abundant than many historians believed and can be analyzed using formal statistical methods. At another session Conrad and Meyer presented “The Economics of Slavery in the Antebellum South,” which incorporated their methodological views to refute a long-standing proposition that the slave system in the southern United States had become moribund by the 1850s and would have died out had there been no Civil War. Conrad and Meyer buttressed the point by showing that slaveholding, viewed as a business activity, had been at least as remunerative as other uses of financial and physical capital. More broadly they illustrated “the ways in which economic theory might be used in ordering and organizing historical facts.”

Two decades later Robert Gallman recalled that the Williamstown “conference did more than put the ball in motion … It also set the tone and style of the new economic history and even forecast the chief methodological and substantive interests that were to occupy cliometricians for the next twenty-one years.” What began in the late 1950s as a trickle of work in the new style grew to a freshet and then a flood, incorporating new methods, examining bodies of data previously too difficult to analyze without the aid of computers, and investigating a variety of questions of traditional importance, mostly in American economic history. The watershed was continent-wide, collecting the work of small clusters of scholars bound together in a ramifying intellectual and social network.

An important and continuing node in this network was at Purdue University in West Lafayette, Indiana. In the late 1950s a group of young historical economists assembled there, among whom the cross-pollination of historical interests and technical expertise was exceptional. In this group were Lance Davis and Jonathan Hughes and several others known primarily for their work in other fields. One was Stanley Reiter, a mathematical economist who traveled with Davis and Hughes to the meetings of the Economic History Association in September 1960 to present their paper explaining the new quantitative historical research being undertaken at Purdue – and to introduce the term “cliometrics” to the profession. The term was coined by Reiter as a whimsical combination of the words Clio, the muse of history, and metrics, from econometrics. As the years went by, the word stuck and became the name of the field.

To build on the enthusiasm aroused by that presentation, and to “consolidate Purdue’s position as the leader in this country of quantitative research in economic history,” Davis and Hughes (with Reiter’s aid) sought and received funds from Purdue for a meeting in December 1960 of about a dozen like-minded economic historians. They gave it the imposing title, “Conference on the Application of Economic Theory and Quantitative Methods to the Study of Problems of Economic History.” For obvious reasons the meetings were soon called “Clio” or the “Cliometrics Conference” by their familiars. Of the six presentations at the first meeting, none was more intriguing than Robert Fogel’s estimates of the “social saving” accruing from the expansion of the American railroad network to 1890.

Sessions were renowned from Clio’s early days as occasions for engaging in sharp debate and asking probing (and occasionally unanswerable) questions. Those who attended the first Clio conference established a tradition of rigorous and detailed analysis of the presenters’ work. In the early years at Purdue and elsewhere, cliometricians developed a research program with mutual support and encouragement and conducted an unusually large proportion of collaborative work, all the while believing in the progressiveness of their efforts.

Indeed, like Walt Rostow, other established economic historians felt that economic history was in need of renewal: Alexander Gerschenkron wrote in 1957 “Economic history is in a poor way. It is unable to attract good students, mainly because the discipline does not present any intellectual challenge …” Some cliometric young Turks were not so mild. While often relying heavily on the wealth of detail amassed in earlier research, they asserted a distinctive identity. The old economic history, it was said, was riddled with errors in economic reasoning and embodied an inadequate approach to causal explanation. The cliometricians insisted on a scientific approach to economic-historical questions, on careful specification of explicit models of the phenomena they were investigating. By implication and by declaration they said that much of conventional wisdom was based on unscientific and unsystematic historical scholarship, on occasion employing language not calculated to endear them to outsiders. The most vocal proponents declared a new order. Douglass North proclaimed that a “revolution is taking place in economic history in the United States … initiated by a new generation of economic historians” intent on reappraising “traditional interpretations of U.S. economic history.” Robert Fogel said that the “novel element in the work of the new economic historians is their approach to measurement and theory,” especially in their ability to find “methods of measuring economic phenomena that cannot be measured directly.” In 1993, these two were awarded the Nobel Memorial Prize in Economics for, in the words of the Nobel committee, being “pioneers in the branch of economic history that has been called the ‘new economic history,’ or cliometrics.”

The hallmark of the top rung of work done by the new economic historians was its integration of fact with theory. As Donald [Deirdre] McCloskey observed in a series of surveys, the theory was often simple. The facts, when not conveniently available, were dug up from surviving sources, whether published or not. Indeed the discipline imposed by the need to measure usually requires more data than would serve for a qualitative argument. Many new economic historians expended considerable effort in the 1960s to expand the American quantitative record. Thus, with eyebrow raised, so to speak, Albert Fishlow remarked in 1970, “It is ironic … to read that … most of the ‘New Economic History’ only applies its ingenuity to analyzing convenient (usually published) data.” Many cliometricians worked their magic not merely by relying on their predecessors’ compilations; as Scott Eddie comments, “one of the most significant contributions of cliometricians’ painstaking search for data has been the uncovering of vast treasure troves of useful data hitherto either unknown, unappreciated, or simply ignored.” Very early in the computer age they put such data into forms suitable for tabulation and statistical analysis.

William Parker and Robert Gallman, with their students, were pioneers in analyzing individual-level data from the United States Census manuscripts, a project arising from Parker’s earlier study of Southern plantations. From the 1860 agricultural census schedule they drew a carefully constructed sample of over 5,000 farms in the cotton counties of the American South and matched those farms with the two separate schedules for the free and slave populations. The Parker-Gallman sample was followed by Census samples for northern agriculture and for the post-bellum South.

The early practitioners of cliometrics applied their theoretical and quantitative skills to some issues well established in the more “traditional” economic historiography, none more important than asking when and how rapidly the North American economy began to experience “modern economic growth.” In the nineteenth century, economic growth in both the U.S. and Canada was punctuated by booms, recessions and financial crises, but the new work provided a better picture of the path of GNP and its components, revealing steady upward trends in aggregate output and in incomes per person and per worker. This last, it seemed clear from the work in the 1950s of Moses Abramovitz and Robert Solow, must have derived significantly from the introduction of new techniques, as well as from expansion of the scale and penetration of the market. Several scholars thus established a related objective, understanding – or at least accounting for – productivity growth.
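The “accounting for” productivity growth mentioned here is conventionally done with a growth-accounting decomposition in the spirit of Solow’s residual: output growth not explained by share-weighted input growth is attributed to technical change. A minimal sketch of that arithmetic, with entirely invented growth rates (none of the figures below come from the studies discussed):

```python
def solow_residual(g_output, g_capital, g_labor, capital_share):
    """Growth accounting: total factor productivity growth is output
    growth minus the share-weighted growth of capital and labor."""
    return g_output - capital_share * g_capital - (1 - capital_share) * g_labor

# Invented annual growth rates: 4% output, 3% capital, 1% labor,
# with capital earning 35% of income.
tfp_growth = solow_residual(0.04, 0.03, 0.01, 0.35)
print(f"residual (TFP) growth: {tfp_growth:.3f} per year")  # 0.023, i.e. 2.3%
```

The striking mid-century finding was that this residual, not input accumulation, accounted for most measured growth in output per worker.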

Attempting to provide sound explanations for growth, productivity change, and numerous other developments in modern economic history, especially of the U.S. and Britain, was the objective of the cliometricians’ theory and quantification. They were much criticized from without for the very use of these technical tools, and within the movement there was much methodological dispute and considerable dissent. Nonetheless, the early cliometricians spawned a sustained intellectual tradition that diffused worldwide from its North American origins.

Historical Economics in Britain

Cliometrics arrived relatively slowly among British economic historians, but it did arrive. Some was homegrown; some was imported. When Jonathan Hughes expressed doubts in 1970 that the American style of cliometrics could ever be an “export product,” he was already wrong. Admittedly, by then the new style had been employed by only a tiny minority of those writing economic history in Britain. Introduction of a more formal style, in Britain as in North America, fell to those trained as economists, initially to Alec Cairncross, Brinley Thomas and Robin Matthews. Cairncross’s book on home and foreign investment and Thomas’s on migration and growth developed, or collected into one place, a great deal of quantitative information for theoretical analysis; their method, as David Landes noted in 1955, was “in the tradition of historical economics, as opposed to economic history.” Matthews’s Study in Trade Cycle History (1954), which examines the trade cycle of 1833-42, was written, he said, in a “quantitative-historical” mode, and contains theoretical reasoning, economic models, and statistical estimates.

Systematic use of national accounting methods to study British economic development was a task undertaken by Phyllis Deane at Cambridge. Her work resulted in two early papers on British income growth and capital formation and in two books of major importance and lasting value: British Economic Growth, 1688-1959 (1962), written with W. A. Cole, and a compendium of underlying data compiled with Brian Mitchell. Despite skeptical reviews, the basics of the Deane-Cole estimates of eighteenth- and early nineteenth-century aggregate growth were accepted widely for two decades and provided a quantitative basis for discussing living standards and the dispersion of technical progress in the new industrial era. Also at Cambridge, Charles Feinstein estimated the composition and magnitude of British investment flows and produced detailed national income estimates for the nineteenth and twentieth centuries, augmenting, refining and revising, as well as extending, the work of Deane and Cole.

All these studies belong to a decidedly British empirical tradition, despite the use of contemporary theoretical constructs, and contained nothing like the later claims of some American cliometricians about the virtues of using formal theory and statistical methods. Research in a consciously cliometric style was strongly encouraged in the 1960s at Oxford by Hrothgar Habakkuk and Max Hartwell, although neither saw himself as a cliometrician. Separately and together, they supported the movement, encouraging students to absorb both quantitative and formal analytical elements into their work.

The incursion of cliometrics into British economic history was – and has remained – neither so widespread nor so dominant as in North America, partly for reasons suggested by Hughes. Although economic history had been taught and practiced in British universities since the 1870s, after the First World War most faculty members were housed in separate departments of economic (and social) history that tended to require of their students only a modicum of economics and little in the way of quantitative methods. With the establishment of new British universities and the rapid expansion of others, a dozen new departments of economic history were founded in the 1960s, staffed largely by people taught in history and economic history departments. The limited presence of cliometric types in Britain at the turn of the 1970s did not come from deficient demand, nor was it due to hostility or indifference. It was due to limited supply stemming from the small scale of the British academic labor market and an aversion to excessive specialization among young economists. Yet the situation was being rectified. On the demand side, British faculties of economics began to welcome more economic historians as colleagues, and, on the supply side, advanced students were being aided by post-graduate stipends and research support provided by the new Social Science Research Council.

During the 1970s a British version of new historical economics began to take shape. Its practitioners expanded their informal networks into formal institutional structures and scholarly ventures. The organized British movement opened in September 1970 at an Anglo-American “Conference on the New Economic History of Britain” in Cambridge (Massachusetts), followed by two others. From these meetings grew a project to re-write British economic history in a cliometric mode, which resulted in the publication in 1981 of a path-breaking two-volume work, The Economic History of Britain since 1700, edited by Roderick Floud and Donald [Deirdre] McCloskey.

Equally path-breaking, perhaps more so, was the outcome of parallel developments in English historical demography, whose practitioners had become progressively more quantitatively and theoretically adept since the 1950s, and for whom 1981 was also a banner year. Although portions of the book had been circulating for some time, E. A. Wrigley and R. S. Schofield’s Population History of England, 1541-1871: A Reconstruction and its striking revisions of English demographic history were now available in one massive document.

As in North America, after the first wave of “quantifiers” invaded parts of British historiography, cliometrics was refined in the heat of scholarly debate.

Controversies

Cliometricians started or continued a series of debates about the nature and sources of economic growth and its welfare consequences that decidedly have altered our picture of modern economic history. The first was initiated by Walt Rostow, who argued that modern economic growth begins with a brief and well-defined period of “take-off,” with the necessary “preconditions” having already become the normal condition of a given national economy or society. His metaphor of a “take-off into self-sustained growth”, which first appeared in a journal article, was popularized in Rostow’s famous book, The Stages of Economic Growth (1960). Rostow asserted that “The introduction of the railroad has been historically the most powerful single initiator of take-offs.” To test this contention, Robert Fogel and Albert Fishlow both wrote Ph.D. dissertations dealing in part with Rostow’s view: Fogel’s Railroads and American Economic Growth (1964) and Fishlow’s American Railroads and the Transformation of the Antebellum Economy (1965). These books contain their estimates of the extent of resource saving that had accrued from the adoption of a new transport system, with costs lower than those of canals. Their results rejected Rostow’s view.
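Fogel’s “social saving” is at bottom a counterfactual cost comparison: how much more it would have cost to move the freight actually carried by rail using the next-best alternative (canals and wagons), often expressed as a share of GNP. A toy version of that arithmetic, with entirely hypothetical figures (none are Fogel’s own estimates):

```python
def social_saving(ton_miles, rail_rate, alternative_rate):
    """Extra cost society would have borne to move the observed
    freight without railroads, at the next-best alternative's rate."""
    return ton_miles * (alternative_rate - rail_rate)

# Invented numbers: 10 billion ton-miles, rail at $0.010 and the
# canal/wagon alternative at $0.015 per ton-mile.
saving = social_saving(10_000_000_000, 0.010, 0.015)
share_of_gnp = saving / 12_000_000_000   # against a hypothetical $12b GNP
print(f"social saving: ${saving:,.0f} ({share_of_gnp:.1%} of GNP)")
```

Fogel’s provocation was that, computed this way for 1890, the saving was a small share of GNP, so the railroad could not have been “indispensable” to American growth.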

Until the cliometricians made a pair of disputatious incursions into its economic history, the American South was largely the province of regional historians – almost a footnote to the story of U.S. economic development. Sparked by Conrad and Meyer, for two decades cliometricians focused intently on the place of the South in the national economy and of slavery in the Southern economy. To what extent was early national economic growth driven by Southern cotton exports and how self-sufficient was the South as an economic region? Douglass North argued that the key to American economic development before 1860 was regional specialization, that Southern cotton was the economy’s staple product, and that much of Western and Northern economic growth derived from Southern demand for food and manufactures. Indeed, Conrad and Meyer had touched a nerve. Their demonstration of current profitability did not demonstrate long-run viability of the slave system; Yasukichi Yasuba was able to fill that gap by showing that slave prices were regularly more than enough to finance rearing slaves for future sale or employment. Many others tested and refined these early results. As a system of organizing production, American slavery was found to have been thriving on the eve of the Civil War; the sources of that prosperity, however, needed deeper examination.
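Yasuba’s viability test is a capitalization argument of a general kind: compare the market price of an income-yielding asset with the discounted cost of producing (“rearing”) it; a persistently positive gap is economic rent, evidence that the system was viable and not merely currently profitable. A generic present-value sketch of that logic, with invented numbers:

```python
def present_value(annual_flows, discount_rate):
    """Discount a stream of year-end net flows back to the present."""
    return sum(flow / (1 + discount_rate) ** t
               for t, flow in enumerate(annual_flows, start=1))

# Hypothetical asset: $50 of net earnings a year for 20 years,
# discounted at 8%. If its market price exceeds a $300 production
# cost, the gap is rent accruing to the producer.
capitalized_value = present_value([50] * 20, 0.08)
production_cost = 300
print(f"capitalized value: {capitalized_value:.0f}, "
      f"rent over cost: {capitalized_value - production_cost:.0f}")
```

In Yasuba’s data the observed market price regularly exceeded the capitalized rearing cost, which is what closed the gap Conrad and Meyer’s profitability calculation had left open.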

In Time on the Cross (1974), Robert Fogel and Stanley Engerman not only reaffirmed the profitability and viability of Southern slavery, but they also made claims about the superior productivity of Southern versus Midwestern agriculture and about the relatively generous material comforts afforded to the slave population. Their book sparked a long-running controversy that extended beyond academia and prompted critical examinations and rebuttals by political and social historians and, above all, by their fellow cliometricians. A major critique was Reckoning with Slavery (by Paul David and others, 1976), as much a defense of cliometric method as a catalogue of what the authors saw as the method’s improper or incomplete application in Time on the Cross. Fogel subsequently published Without Consent or Contract (1989), a defense and extension of his and Engerman’s earlier work.

The remarkable antebellum prosperity of the Southern slave economy was followed by an equally remarkable relative decline in Southern per-capita income after the war. While the remainder of the American economy grew rapidly, the South stagnated, with a distinctively low-wage, low-productivity economy and a poorly educated labor force, both black and white. The next generation of cliometricians asked “Why?” Was it the legacy of the slave system, of the virtual absence of industrial development in the antebellum South, of post-Civil War Reconstruction and backlash, of continued reliance on cotton, of Jim Crow, or of racism and discrimination? Roger Ransom and Richard Sutch investigated share-tenancy, debt peonage and labor effort in maintaining cotton cultivation, using individual-level data, some derived à la Parker and Gallman, from a sample of the manuscript U.S. Censuses. Gavin Wright focused on an effective separation of the Southern from the national labor market, and Robert Margo examined the region’s low level of educational investment and its consequences.

An entirely new line of investigation derived from the research on slavery, measuring the “biological standard of living” using anthropometric data. Richard Steckel’s paper on slave height profiles led directly to the discussion of “Anthropometric Indexes of Malnutrition” in Without Consent or Contract. In a corrective to the Fogel-Engerman interpretation of the slave diet, Steckel showed how stunted (and thus how poorly fed) slave children were before they came of working age. John Komlos discovered that heights (of West Point cadets) were declining even as American per capita income was rising in the years before the Civil War, what he called the “Antebellum Puzzle.” Elsewhere, Roderick Floud led a project employing anthropometric data from records of British military recruits, while Stephen Nicholas, Deborah Oxley and Steckel analyzed records for male and female convicts transported to Australia.

Industrialization and its new technologies in the U.S. long predate the Civil War. In writing about technological progress, economic historians had, before the 1960s, tended to concentrate on single industries or economies. Yet distinctive “national” technologies emerged in the early nineteenth century (e.g., contemporary British observers distinguished “The American System of Manufactures” from their own). Amid the early ferment of quantitative economic history in the United States, Hrothgar Habakkuk published American and British Technology in the Nineteenth Century: The Search for Labour-Saving Inventions, a truly comparative study. It was 1962, when, as Paul David writes, “economic historians’ interests in Anglo-American technological divergences were suddenly raised from a quiet simmer to a furious boil by the publication of … Habakkuk’s now celebrated book on the subject.” Habakkuk expanded on an idea that the apparent labor-saving bias of American manufacturing techniques was due to land so abundant that American workers were paid (relative to other factors) much more than what their British counterparts received, but he did not resolve whether the bias was due to more machines per worker, better machines, or more inventiveness.

One strand of the debate over what Peter Temin called Habakkuk’s “labor-scarcity paradox” left to one side the question of “better machines.” It fell to Nathan Rosenberg and Paul David to explore the distinctive technological trajectories of different economies. Rosenberg pointed to the emergence of “technologically convergent” production processes and to the importance of very low relative materials costs in American manufacturing. Paul David reviewed the debate, beginning to formulate a theoretical approach to explain sources of technical change (and divergence). He argued that an economy’s trajectory of technological development is conditioned, perhaps only initially, by relative factor prices, but then by opportunities for further progress based on localized learning from, or constrained by, existing techniques and their histories. David developed the concept of “path dependence,” which is “a dynamic process whose evolution is governed by its own history.”
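A standard device for illustrating path dependence of the kind David describes (used in related work on technology adoption by W. Brian Arthur, among others) is a Pólya urn: each adoption of a technique makes further adoptions of that technique more likely, so where the process settles is governed by its own early history. A small simulation sketch (the framing of “techniques A and B” is illustrative, not David’s own model):

```python
import random

def polya_urn(draws, seed):
    """Polya urn: pick a ball at random, return it plus one more of
    the same color. Early draws durably tilt the long-run shares."""
    rng = random.Random(seed)
    red, blue = 1, 1                     # one adopter of each technique
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1                     # another adoption of technique A
        else:
            blue += 1                    # another adoption of technique B
    return red / (red + blue)

# Identical starting conditions, different early histories, very
# different long-run shares:
print([round(polya_urn(5000, seed), 2) for seed in range(5)])
```

The long-run share converges, but to a limit that differs from run to run: a dynamic process “whose evolution is governed by its own history.”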

The first systematic cliometric debate involving European economic history was over an alleged British technological and economic failure in the late nineteenth century. The slower growth of income and exports, the loss of markets even in the Empire, and an “invasion” of foreign manufactures (many American) alarmed businessmen and policymakers alike and led to opposition to a half-century of British “Free Trade.” Who was to blame for loss of competitiveness? Although some scholars attributed Britain’s “climacteric” to the maturation of the technologies underpinning her success during the Industrial Revolution, others attributed it to “entrepreneurial failure” and cited the inability or refusal of British business leaders to adopt the best available technologies. Cliometricians argued, by and large, that British businessmen made their investment and production decisions in a sensible, economically rational fashion, given the constraints they faced; they had made the best of a bad situation. Subsequent research has demonstrated the problem to be more complex, and it is yet to be resolved.

Many results of the cliometrics revolution come from the application of theory and measurement in the service of history; a converse case comes from the macroeconomists. Monetarists, in particular, have placed economic history in the service of theory, prominently in analyzing the Great Depression of the 1930s. In 1963, Milton Friedman and Anna Schwartz, in A Monetary History of the United States, 1867-1960, opened a discussion that has led to widespread, but not universal, acceptance among economists of a sophisticated version of the “quantity theory of money.” Their detailed examination of several episodes in American monetary development under varying institutional regimes allowed them to use a set of “natural experiments” to assess the economic impact of exogenous changes in the stock of money. The Friedman-Schwartz enterprise sought support for the general proposition that money is not simply a veil over real transactions – that money does matter. Their demonstration of that point for the Great Depression initiated an entire scholarly literature involving not only economic historians but also monetary and macroeconomists. Peter Temin was among the first of the economic historians to question their argument, in Did Monetary Forces Cause the Great Depression? (1976). His answer was essentially “No,” stressing declines in consumer spending and in investment in the late 1920s as initiating factors and discounting money stock reductions for the continued downturn. In a later book, Lessons from the Great Depression (1989), Temin in effect recanted his earlier position, impelled by a good deal of further research, especially on international finance. The present consensus is that what Friedman and Schwartz call “The Great Contraction, 1929-1933” may have been initiated by real factors in the late 1920s, but it was faulty public policy and adherence to the Gold Standard that played major roles in turning an economic downturn into “The Great Depression.”
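The quantity-theory framework behind the Friedman-Schwartz argument can be sketched through the equation of exchange, M × V = P × Y: if velocity is reasonably stable, a contraction in the money stock forces nominal income down. A toy illustration (the one-third contraction echoes the rough magnitude of the 1929-33 fall in the U.S. money stock; the other figures are invented):

```python
def nominal_income(money_stock, velocity):
    # Equation of exchange: M * V = P * Y, i.e. nominal income.
    return money_stock * velocity

m0, v = 27.0, 3.0          # hypothetical money stock ($b) and velocity
m1 = m0 * (2 / 3)          # a one-third contraction in the money stock
fall = 1 - nominal_income(m1, v) / nominal_income(m0, v)
print(f"nominal income falls by {fall:.0%} if velocity stays fixed")  # 33%
```

The debate Temin opened was precisely over causal direction: whether the monetary contraction drove the fall in income, or spending collapsed first with money following.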

A broad new approach to economic change over time has emerged from the mind of Douglass North. Confronted in the later 1960s with European economic development in its variety and antiquity, North became dissatisfied with the limited modes of analysis that he had applied fruitfully to the American case and concluded that “we couldn’t make sense out of European economic history without explicitly modeling institutions, property rights, and government.” For that matter, making sense of a wider view of American economic history was similarly difficult, as exemplified in the Lance Davis and North venture, Institutional Change and American Economic Growth (1971). The core of North’s model, conceptual rather than formal, is that, when changes in underlying circumstances alter the cost-benefit calculus of existing arrangements, new institutions will arise if there is a net benefit to be realized. Although their approach arose from dissatisfaction with the static nature of economic theory in the 1960s, North and his colleagues nonetheless followed what most other economists would do in arguing that optimal institutional forms will arise dynamically from an essentially profit-maximizing response to changes in incentives. As Davis and North were quick to admit, their effort was “a first (and very primitive) attempt” at formulating a theory of institutional change and applying that theory to American institutional development. North recognized the limitations of his early work on institutional change and has endeavored to develop a more subtle and articulated approach. In Understanding the Process of Economic Change (2005), North stresses again that modeling institutional change is less than straightforward, and he continues to examine the persistence of “institutions that provided incentives for stagnation and decline.”

Retrospect and Prospect

In the 1960s, when the first cliometricians began to group themselves into a distinct intellectual and social movement, buoyed by their revisionist achievements, they (at least many of them) thought they could use their scientific approach to re-write history. This hope may not have been a vain one, but it is yet to be realized. The best efforts of cliometricians have merged with those in other traditions to develop a rather different understanding of the economic past from views maintained half a century ago.

As economic history has evolved, so have the environs economic historians inhabit. In the Anglophone world, economic history – and cliometrics within it – burgeoned with the growth of higher education, but it has recently suffered the effects of retrenchment in that sector. Elsewhere, a new multi-lingual generation of enthusiastic economic historians and historical economists has arisen, with English as the language of international discourse. Both history and economics have been transformed by dissatisfaction with old verities and values, by the adoption of new methods and points of view, and by the posing of new or revived questions. Economic history has been a beneficiary of and a contributor to such changes.

Although this entry focuses on the development of historical economics in the United States and the United Kingdom, we note that the cliometric approach has diffused well beyond their boundaries. In France the economist’s quantitative approach was fostered when Kuznets’s historical national accounts project recruited scholars in the 1950s to amass and organize the available agricultural, output, and population data in a new histoire quantitative. Still, that movement was overshadowed by the Annales school, whose histoire totale involved much data collection but limited economic analysis. Economic history of France, produced in the cliometric mode by scholars trained there, did not arrive in force until the mid-1980s. French cliometrics was first written by economic historians from (or trained in) North America or Britain; the Gallic cliometrics revolution occurred gradually, for “peculiarly French” institutional and ideological reasons. In Germany similar institutional barriers were partially breached in the 1960s with the arrival of a “turnkey” cliometrics operation in the form of an American-trained American scholar, Richard Tilly, who went from Wisconsin to Münster. Tilly was joined later by a few central Europeans who received American degrees, and all have since taught younger German cliometricians. Leading cliometric scholars from Italy, Spain, and Portugal likewise received their post-graduate educations in Britain or America. The foremost Japanese cliometrician, Yasukichi Yasuba, received his Ph.D. from Johns Hopkins, supervised by Simon Kuznets.

If cliometrics in and of continental Europe could trace its roots to North America and Britain, by the 1980s it had developed indigenous strength and identity. At the Tenth International Economic History Congress in Leuven, Belgium (1990), a new association of analytical economic historians was founded. Rejecting the use of “cliometrics” as a descriptor, the participants endorsed the nascent European Historical Economics Society. Subsequently national associations and seminars have grown up under the umbrella of the EHES – for example, French historical economists have the Association Française de Cliométrie and a new international journal, Cliometrica, while the Portuguese and Spaniards have sponsored a series of “Iberometrics” conferences.

Cliometrics has transformed itself over the past half-century, forging important links with other disciplines, continuing to broaden its compass, and interpreting “new” phenomena. Cliometricians are showing, for example, that recent “globalization” has origins and manifestations going back half a millennium and, given the recent experience of the formerly Socialist “transitional” economies, that the deep historical roots of institutions, organizations, values, and behavior in the developed economies cannot be duplicated by following simple formulae. Despite the presentism of contemporary society, economic history will continue to address essential questions of origins and consequences, and it seems likely that cliometricians will complement and sometimes lead their colleagues in providing the answers. Cliometrics is a well-established field of study, and its practitioners continue to increase our understanding of how economies evolve.

Source Note: The bulk of this article is a condensed version of the introduction to Lyons, Cain, and Williamson, eds., Reflections on the Cliometrics Revolution: Conversations with Economic Historians (2008), copyright (c) The Cliometric Society, Inc., which receives the royalties; reproduced by permission. Readers should consult that book for a more complete presentation, notes, and a full bibliography.

Further Reading

Coats, A. W. “The Historical Context of the ‘New’ Economic History.” Journal of European Economic History 9, no. 1 (1980): 185-207.

“Cliometrics after 40 Years.” American Economic Review: Papers and Proceedings 87, no. 2 (1997): 396-414. [Commentary by Claudia Goldin, Avner Greif, James J. Heckman, John R. Meyer, and Douglass C. North.]

Crafts, N. F. R. “Cliometrics, 1971-1986: A Survey.” Journal of Applied Econometrics 2, no. 3 (1987): 171-92.

Davis, Lance E., Jonathan R. T. Hughes and Duncan McDougall. American Economic History. Homewood, IL: Irwin, 1961. [The first textbook of U.S. economic history to make systematic use of economic theory to organize the exposition. Second edition, 1965; third edition, 1969.]

Davis, Lance E., Jonathan R. T. Hughes and Stanley Reiter. “Aspects of Quantitative Research in Economic History.” Journal of Economic History 20, no. 4 (1960): 539-47. [The article in which “cliometrics” first appeared in print.]

Drukker, J. W. The Revolution That Bit Its Own Tail: How Economic History Has Changed Our Ideas about Economic Growth. Amsterdam: Aksant, 2006.

Engerman, Stanley L. “Cliometrics.” In The Social Science Encyclopedia, second edition, edited by Adam Kuper and Jessica Kuper, 96-98. New York: Routledge, 1996.

Field, Alexander J. “The Future of Economic History.” In The Future of Economic History, edited by Alexander J. Field, 1-41. Boston: Kluwer-Nijhoff, 1987.

Fishlow, Albert, and Robert W. Fogel. “Quantitative Economic History: An Interim Evaluation. Past Trends and Present Tendencies.” Journal of Economic History 31, no. 1 (1971): 15-42.

Floud, Roderick. “Cliometrics.” In The New Palgrave: A Dictionary of Economics, edited by John Eatwell, Murray Milgate and Peter Newman, vol. 1, 452-54. London: Macmillan, 1987.

Goldin, Claudia. “Cliometrics and the Nobel.” Journal of Economic Perspectives 9, no. 2 (1995): 191-208.

Grantham, George. “The French Cliometric Revolution: A Survey of Cliometric Contributions to French Economic History.” European Review of Economic History 1, no. 3 (1997): 353-405.

Lamoreaux, Naomi R. “Economic History and the Cliometric Revolution.” In Imagined Histories: American Historians Interpret the Past, edited by Anthony Molho and Gordon S. Wood, 59-84. Princeton: Princeton University Press, 1998.

Lyons, John S., Louis P. Cain, and Samuel H. Williamson, eds. Reflections on the Cliometrics Revolution: Conversations with Economic Historians. New York: Routledge, 2008.

McCloskey, Donald [Deirdre] N. Econometric History. London: Macmillan, 1987.

Parker, William, editor. Trends in the American Economy in the Nineteenth Century. Princeton, N.J.: Princeton University Press, 1960. [Volume 24 in Studies in Income and Wealth, in which many of the papers presented at the 1957 Williamstown conference appear.]

Tilly, Richard. “German Economic History and Cliometrics: A Selective Survey of Recent Tendencies.” European Review of Economic History 5, no. 2 (2001): 151-87.

Whaples, Robert. “A Quantitative History of the Journal of Economic History and the Cliometric Revolution.” Journal of Economic History 51, no. 2 (1991): 289-301.

Williamson, Samuel H. “The History of Cliometrics.” In The Vital One: Essays in Honor of Jonathan R. T. Hughes, edited by Joel Mokyr, 15-31. Greenwich, Conn.: JAI Press, 1991. [Research in Economic History, Supplement 6.]

Williamson, Samuel H., and Robert Whaples. “Cliometrics.” In The Oxford Encyclopedia of Economic History, vol. 1, edited by Joel Mokyr, 446-47. Oxford: Oxford University Press, 2003.

Wright, Gavin. “Economic History, Quantitative: United States.” In International Encyclopedia of the Social and Behavioral Sciences, edited by Neil J. Smelser and Paul B. Baltes, 4108-14. Amsterdam: Elsevier, 2001.

Citation: Lyons, John. “Cliometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 27, 2009. URL http://eh.net/encyclopedia/cliometrics/

The Fall of the House of Speyer: The Story of a Banking Dynasty

Author(s):Liebmann, George W.
Reviewer(s):Cook, Eli

Published by EH.Net (January 2017)

George W. Liebmann, The Fall of the House of Speyer: The Story of a Banking Dynasty. New York: I.B. Tauris, 2015. xii + 244 pp. $35 (hardcover), ISBN: 978-1-78453-176-8.

Reviewed for EH.Net by Eli Cook, Department of History, University of Haifa.

George Liebmann has written a richly detailed and highly useful history of the final fifty years of the German-Jewish Speyer family’s investment banking empire, which spanned England, the United States, and Germany — if not the world. Deeply researched, the book paints a precise picture of the two Speyer brothers — James in the United States and Edgar in England — who ran the investment bank on both sides of the Atlantic at the turn of the twentieth century. Hardly a one-track book, Liebmann delves not only into the Speyers’ business interests but also their philanthropic work, cultural milieu, spousal relations, musical talents, political leanings, and personal problems. In a declensionist narrative that laments the end of free trade and the rise of “nationalism,” Liebmann convincingly portrays how the Speyers suddenly found themselves torn between warring national allegiances. In tracing the fall of the House of Speyer through Edgar’s exile from England and James’s fall from grace in the United States, the book reminds us how difficult it was for cosmopolitan German-Jewish banking elites living in the Anglo-American world to maintain not only their banking business but also their personal ties (the two being indistinguishable) following the outbreak of World War I. In the end, as Liebmann carefully shows, a banking empire built on linking German capital with American and English elites was unable to survive the belligerence of the early twentieth century. The rise of Hitler was the final straw, and the Speyers’ empire collapsed in 1939.

A true capitalist pioneer, Phillip Speyer opened the wealthy Frankfurt family’s New York branch in 1837 — well before the Schiffs, the Kuhns, the Goldmans, the Sachses, or the Loebs ever made their names in the city. After marketing Union bonds to Europe during the Civil War, the family built much of its fortune and reputation as central architects of the financing of the transcontinental railroads in the 1870s. However, because he is interested mostly in the lives of Phillip’s grandsons, James and Edgar, between the 1890s and the 1920s, Liebmann does not focus much on the Speyers’ role in those great — and, as Richard White’s Railroaded (W.W. Norton, 2012) has taught us, highly corrupt — railroad adventures.

Liebmann’s own account of the Speyers’ business — like the family’s vast financial network — spans the globe. The book does an excellent job of covering James and Edgar’s wide-ranging investment portfolio, be it Los Angeles aqueducts and London subways, Cuban and Weimar Republic bonds, Philippine and Bolivian railroads, or Hungarian and Bulgarian League of Nations loans. Thick descriptions of all these investment deals will likely be the most useful part of the book for the economic historian. Liebmann shows, for instance, that while Edgar Speyer made the London tube the only privately financed subway in the world, the project only became profitable after the Speyers also made sure to buy the biggest bus company in London. In covering the Speyers’ marketing of Cuban loans, Liebmann nicely demonstrates how this lucrative issuance was secured by Edgar and James’s ability to get the pro-American Cuban government to commit to permanent taxes (known as the “Speyer Taxes”) on tobacco, alcohol, and sugar following the Spanish-American War. Similarly detailed chapters examine the central role the Speyers played in Latin and Central American railroads as well as Eastern European sovereign debt.

Yet while Liebmann does an excellent job documenting the life and work of James and Edgar Speyer, he often misses the opportunity to use his vast knowledge of the Speyers as a lens into bigger historical questions. For example, Liebmann notes how Edgar Speyer was an influential free-market Liberal who railed against increased government and municipal expenditure. Yet he takes this position as a given, and does not link Edgar’s austerity politics with his own business interests. As owners of the London Underground, could it be that the Speyers — and other financial elites — rejected government investment because they were afraid to be priced out of the profitable infrastructure business?

The Speyers could have also served as a tremendous case study for understanding the rise of American imperialism in the early twentieth century, as they were central players in the “dollar diplomacy” taking place in Cuba and the Philippines — the two prize regions that the United States wrested away from Spain at the turn of the century. Their actions — such as the establishment of the Speyer Taxes — would have massive political repercussions. As Liebmann notes, up until 1940 the Cuban government had to devote a whopping 15 percent of its national revenue to service the Speyers’ debt, a development which surely played a role in the Cuban Revolution of the 1950s. Such investments, as well as the sovereign debt loans to Weimar Germany and Eastern Europe, belie Liebmann’s simplistic claim that the free-trade Speyers were victims of “nationalism.” If anything, nation-building and the Speyers’ close ties to government leaders helped make them rich.

Finally, the Speyers’ shifting investment patterns — from transcontinental American railways to global railroads and sovereign debt — could have also offered a fascinating look at the United States’ own epochal shift from a nation that attracts capital to a country that exports it. Such a transformation of American capitalism from debtor to creditor has been painfully overlooked by historians, and while Liebmann’s book offers rich material, his analysis of this transformation is lacking.

Despite these faults, Liebmann has written the best account yet of the House of Speyer, at its peak in the 1900s the third largest investment banking firm, and one that in 1913 managed the 2015 equivalent of $52 billion in assets. Often overshadowed by the Houses of Morgan, Lehman, and Goldman Sachs — perhaps because the Speyers are no longer a household name — this book will be an invaluable read for any historian interested in the biggest financial players of the era.

Eli Cook is an Assistant Professor of American History at the University of Haifa in Israel. He received his Ph.D. from Harvard University in 2013, where he was part of the Program for the Study of Capitalism. He is currently finishing up a book manuscript for Harvard University Press titled The Pricing of Progress: Economic Indicators and the Capitalization of American Life.

Copyright (c) 2017 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (January 2017). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s):Business History
Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):Europe
North America
Time Period(s):19th Century
20th Century: Pre WWII

Project 2000/2001

Project 2000

Each month during 2000, EH.NET published a review essay on a significant work in twentieth-century economic history. The purpose of these essays was to survey the works that have had the most influence on the field of economic history and to highlight the intellectual accomplishments of twentieth-century economic historians. Each review essay outlines the work’s argument and findings, discusses the author’s methods and sources, and examines the impact that the work has had since its publication.

Nominations were received from dozens of EH.Net’s users. P2K selection committee members were: Stanley Engerman (University of Rochester), Alan Heston (University of Pennsylvania), Paul Hohenberg, chair (Rensselaer Polytechnic Institute), and Mary Yeager (University of California-Los Angeles). Project Chair was Robert Whaples (Wake Forest University).

The review essays are:

Braudel, Fernand
Civilization and Capitalism, 15th-18th Century
Reviewed by Alan Heston (University of Pennsylvania).

Chandler, Alfred D. Jr.
The Visible Hand: The Managerial Revolution in American Business
Reviewed by David S. Landes (Department of Economics and History, Harvard University).

Chaudhuri, K. N.
The Trading World of Asia and the English East India Company, 1660-1760
Reviewed by Santhi Hejeebu.

Davis, Lance E. and North, Douglass C. (with the assistance of Calla Smorodin)
Institutional Change and American Economic Growth.
Reviewed by Cynthia Taft Morris (Department of Economics, Smith College and American University).

Fogel, Robert W.
Railroads and American Economic Growth: Essays in Econometric History
Reviewed by Lance Davis (California Institute of Technology).

Friedman, Milton and Schwartz, Anna Jacobson
A Monetary History of the United States, 1867-1960
Reviewed by Hugh Rockoff (Rutgers University).

Heckscher, Eli F.
Mercantilism
Reviewed by John J. McCusker (Departments of History and Economics, Trinity University).

Landes, David S.
The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present
Reviewed by Paul M. Hohenberg (Rensselaer Polytechnic Institute).

Pinchbeck, Ivy
Women Workers and the Industrial Revolution, 1750-1850 
Reviewed by Joyce Burnette (Wabash College).

Polanyi, Karl
The Great Transformation: The Political and Economic Origins of Our Time
Reviewed by Anne Mayhew (University of Tennessee).

Schumpeter, Joseph A.
Capitalism, Socialism and Democracy 
Reviewed by Thomas K. McCraw (Harvard Business School).

Weber, Max
The Protestant Ethic and the Spirit of Capitalism
Reviewed by Stanley Engerman.

Project 2001

Throughout 2001 and 2002, EH.Net published a second series of review essays on important and influential works in economic history. As with Project 2000, nominations for Project 2001 were received from many EH.Net users and reviewed by the Selection Committee: Lee Craig (North Carolina State University); Giovanni Federico (University of Pisa); Anne McCants (MIT); Marvin McInnis (Queen’s University); Albrecht Ritschl (University of Zurich); Winifred Rothenberg (Tufts University); and Richard Salvucci (Trinity College).

Project 2001 selections were:

Borah, Woodrow Wilson
New Spain’s Century of Depression
Reviewed by Richard Salvucci (Department of Economics, Trinity University).

Boserup, Ester
Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure
Reviewed by Giovanni Federico (Department of Modern History, University of Pisa).

Deane, Phyllis and W. A. Cole
British Economic Growth, 1688-1959: Trends and Structure
Reviewed by Knick Harley (Department of Economics, University of Western Ontario).

Fogel, Robert and Stanley Engerman
Time on the Cross: The Economics of American Negro Slavery
Reviewed by Thomas Weiss (Department of Economics, University of Kansas).

Gerschenkron, Alexander
Economic Backwardness in Historical Perspective
Review Essay by Albert Fishlow (International Affairs, Columbia University).

Horwitz, Morton
The Transformation of American Law, 1780-1860
Reviewed by Winifred B. Rothenberg (Department of Economics, Tufts University).

Kuznets, Simon
Modern Economic Growth: Rate, Structure and Spread
Reviewed by Richard A. Easterlin (Department of Economics, University of Southern California).

Le Roy Ladurie, Emmanuel
The Peasants of Languedoc
Reviewed by Anne E.C. McCants (Department of History, Massachusetts Institute of Technology).

North, Douglass and Robert Paul Thomas
The Rise of the Western World: A New Economic History
Reviewed by Philip R. P. Coelho (Department of Economics, Ball State University).

de Vries, Jan
The Economy of Europe in an Age of Crisis, 1600-1750
Review Essay by George Grantham (Department of Economics, McGill University).

Temin, Peter
The Jacksonian Economy
Reviewed by Richard Sylla (Department of Economics, Stern School of Business, New York University).

Wrigley, E. A. and R. S. Schofield
The Population History of England, 1541-1871: A Reconstruction

Project Coordinator and Editor: Robert Whaples (Wake Forest University)

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality Rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American carriers were poorly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889-1901

(Fatality Rates per Thousand Workers per Year)

Category                                    1889   1895   1901
British railroad workers, all causes        1.14   0.95   0.89
British trainmen (a), all causes            4.26   3.22   2.21
    Coupling                                0.94   0.83   0.74
American railroad workers, all causes       2.67   2.31   2.50
American trainmen, all causes               8.52   6.45   7.35
    Coupling                                1.73c  1.20   0.78
    Braking (b)                             3.25c  2.44   2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease of mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also date from this period, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly-formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response, Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and reported being impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing, and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies, such as DuPont, and in whole industries, such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but also reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality Rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal mining, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reduction in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948, Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Judy Daley. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers result in [10/(450×2000)]x1,000,000 = 11.1 injuries per million hours worked.
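As an illustration (a Python sketch, not part of the original text), the conversions described in this footnote can be written out directly, using the footnote’s example of ten injuries among 450 workers and a 2,000-hour work year:

```python
def injury_rates(injuries, workers, hours_per_year=2000):
    """Convert a raw annual injury count into the rate forms used in the text."""
    per_worker = injuries / workers                 # annual rate per worker
    per_thousand = per_worker * 1_000               # per thousand workers
    per_hundred_thousand = per_worker * 100_000     # per hundred thousand workers
    total_hours = workers * hours_per_year          # total hours worked in the year
    per_million_hours = injuries / total_hours * 1_000_000
    return per_worker, per_thousand, per_hundred_thousand, per_million_hours

# Footnote's example: ten injuries among 450 workers in one year.
rate, per_k, per_100k, per_mh = injury_rates(10, 450)
# rate ≈ 0.0222 per worker, i.e., 22.2 per thousand, 2,222 per hundred
# thousand workers, and 11.1 injuries per million hours worked.
```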

2 For statistics on work injuries from 1922 to 1970 see U.S. Department of Commerce, Historical Statistics, Series D-1029 to D-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System; Rosenberg, Technology; Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety; Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal power, there was little need to use mineral fuel in seventeenth- and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make the improvements necessary to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several transportation links between Pennsylvania’s anthracite fields and seaboard markets, via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829), ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:

Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;

Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coal Mining

The antebellum period also saw the expansion of coal mining into many states beyond Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was less dangerous than in the later era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power, even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842, when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in a union, which struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work, and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, so anthracite appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when adjusted for wartime inflation, prices increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and by 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms, or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were opened for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing its railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some miners used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water line. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the implementation of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA counted about a quarter of a million members, had raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coal fields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company, symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

 

Table 1: Coal Production in the United States, 1829-1899

Year    Anthracite    Bituminous    Percent Increase over Decade    Tons per Capita
1829    138           102           —                               0.02
1839    1,008         552           550                             0.09
1849    3,995         2,453         313                             0.28
1859    9,620         6,013         142                             0.50
1869    17,083        15,821        110                             0.85
1879    30,208        37,898        107                             1.36
1889    45,547        95,683        107                             2.24
1899    60,418        193,323       80                              3.34

Note: Production is in thousands of tons; the percent increase is for combined anthracite and bituminous production.

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
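The “Percent Increase over Decade” figures in Table 1 can be reproduced from the anthracite and bituminous columns. A short Python check (a sketch, using only the table’s own numbers):

```python
# Decade-over-decade growth of combined anthracite + bituminous output,
# in thousands of tons, from Table 1.
production = {
    1829: 138 + 102,
    1839: 1008 + 552,
    1849: 3995 + 2453,
    1859: 9620 + 6013,
    1869: 17083 + 15821,
    1879: 30208 + 37898,
    1889: 45547 + 95683,
    1899: 60418 + 193323,
}

years = sorted(production)
for prev, curr in zip(years, years[1:]):
    pct = (production[curr] / production[prev] - 1) * 100
    print(curr, round(pct))  # e.g. 1839 550
```

Each printed value matches the table, confirming that the column measures growth of total (not anthracite-only) output.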

Table 2: Leading Coal Producing States, 1889

State            Coal Production (thousands of tons)
Pennsylvania     81,719
Illinois         12,104
Ohio             9,977
West Virginia    6,232
Iowa             4,095
Alabama          3,573
Indiana          2,845
Colorado         2,544
Kentucky         2,400
Kansas           2,221
Tennessee        1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890-1904
(Fatality Rates per Thousand Workers per Year)

Years        American Anthracite    American Bituminous    Great Britain
1890-1894    3.29                   2.52                   1.61
1900-1904    3.13                   3.53                   1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars to couple and uncouple them and had to ride the cars to work the brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were poorly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889-1901
(Fatality Rates per Thousand Workers per Year)

                               1889     1895     1901
British railroad workers
  All causes                   1.14     0.95     0.89
British trainmen (a)
  All causes                   4.26     3.22     2.21
  Coupling                     0.94     0.83     0.74
American railroad workers
  All causes                   2.67     2.31     2.50
American trainmen
  All causes                   8.52     6.45     7.35
  Coupling                     1.73c    1.20     0.78
  Braking (b)                  3.25c    2.44     2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.

Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also dated from these years, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany. He said he was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs along with the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont, and in whole industries such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates per Million Manhours)

Period       Fatality Rate    Injury Rate
1910-1913    0.40             44.1
1937-1939    0.13             11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal mining, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)

Year    Manufacturing    Coal Mining
1926    24.2
1931    18.9             89.9
1939    14.9             69.5
1945    18.6             60.7
1950    14.7             53.3
1960    12.0             43.4
1970    15.2             42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA). The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: the Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be 0.0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2,000 hours, ten injuries among 450 workers result in [10/(450×2000)]×1,000,000 = 11.1 injuries per million hours worked.
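The conversions in this note can be sketched in a few lines of Python (the ten-injuries-among-450-workers figures are the hypothetical example used above):

```python
# Convert an injury count into the rate conventions used in this article.
injuries = 10
workers = 450
hours_per_year = 2000  # assumed average work year

rate = injuries / workers                        # injuries per worker per year
per_thousand = rate * 1_000                      # per thousand workers
per_hundred_thousand = rate * 100_000            # per hundred thousand workers
per_million_hours = injuries / (workers * hours_per_year) * 1_000_000

print(round(per_thousand, 1))       # 22.2
print(round(per_million_hours, 1))  # 11.1
```

Note that with a 2,000-hour work year, the per-million-hours rate is simply half the per-thousand-workers rate.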

2 For statistics on work injuries from 1922-1970, see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system, see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun Never Shines. Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology, see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety. Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop-floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety (Cambridge: MIT Press, 1979).

Citation: Aldrich, Mark. “History of Workplace Safety in the United States, 1880-1970.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-workplace-safety-in-the-united-states-1880-1970/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40.12 per barrel, so one contract was worth $40,120 = $40.12 × 1,000.
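The notional-value arithmetic can be sketched as follows; this is a hypothetical illustration of the calculation in the text, and the variable names are ours, not an exchange API:

```python
# Notional value of one NYMEX light sweet crude contract (1,000 barrels)
# at the price quoted in the text for May 25, 2004.
CONTRACT_SIZE = 1_000        # barrels per contract
price_per_barrel = 40.12     # dollars per barrel

notional_value = price_per_barrel * CONTRACT_SIZE
print(f"${notional_value:,.2f}")  # $40,120.00
```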

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw on the clearinghouse $50, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
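The daily marking-to-market in the soybean example can be sketched as a small calculation. This is a minimal illustration of the mechanics described above; the function name and rounding convention are ours:

```python
# Mark-to-market for one day's price move on a futures contract:
# the long is credited the change in value, the short is debited it.
def mark_to_market(contract_size, trade_price, settlement_price):
    """Return (credit to long, debit to short) for one day's price move."""
    change = round((settlement_price - trade_price) * contract_size, 2)
    return change, -change

# The text's example: 5,000-bushel soybean contract, bought at $9.70,
# settled that evening at $9.71.
long_gain, short_loss = mark_to_market(5_000, 9.70, 9.71)
print(long_gain)   # 50.0 -- credited to Member B
print(short_loss)  # -50.0 -- the $50 variation margin owed by Member S
```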

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Similarly, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample, its price is unfettered, and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 − $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 − $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
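The two legs of this textbook hedge can be tallied in a short sketch using the prices from the example; the variable names are illustrative:

```python
# Textbook short hedge from the corn example (no basis risk): spot and
# futures prices both fall five cents between May 1 and June 1.
BUSHELS = 5_000

spot_may, spot_june = 2.40, 2.35   # cash market prices per bushel
fut_may, fut_june = 2.50, 2.45     # July futures prices per bushel

spot_pnl = round((spot_june - spot_may) * BUSHELS, 2)   # loss on inventory
futures_pnl = round((fut_may - fut_june) * BUSHELS, 2)  # gain on short hedge

print(spot_pnl)                # -250.0
print(futures_pnl)             # 250.0
print(spot_pnl + futures_pnl)  # 0.0 -- the hedge preserves inventory value
```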

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo, than south on the Mississippi through New Orleans (Odle 1964, 441). Though as late as 1840, Ohio was the only state or region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – market for extant, as opposed to newly issued securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
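The ring described above can be modeled as a toy netting calculation. Broker names and the 1,000-bushel quantity follow the example in the text; the code itself is only a sketch:

```python
# Toy model of a ring settlement: B2 is owed 1,000 bushels by B1, B3 by B2,
# and B1 by B3. No broker has net exposure, yet three bilateral contracts
# remain open -- the situation a ring (or later, a clearinghouse) resolves.
from collections import defaultdict

# (buyer, seller, bushels) for each open contract in the ring
open_contracts = [("B2", "B1", 1_000), ("B3", "B2", 1_000), ("B1", "B3", 1_000)]

net_position = defaultdict(int)
for buyer, seller, qty in open_contracts:
    net_position[buyer] += qty   # long side
    net_position[seller] -= qty  # short side

print(dict(net_position))  # every broker is flat: {'B2': 0, 'B1': 0, 'B3': 0}
print(len(open_contracts))  # ...yet 3 contracts are still open
```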

Finally, the transformation in Chicago grain markets from forward to futures trading occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). Though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 Though this strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
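The arbitrage logic can be sketched with hypothetical prices; the $3.20, $3.45, and $0.18 figures below are invented for illustration and are not from the historical record:

```python
# Cost-of-carry arbitrage check: if the spread between a far and a near
# futures price exceeds the cost of storage, a trader can buy near,
# store, and deliver far, locking in the difference.
def carry_arbitrage_profit(near_price, far_price, storage_cost):
    """Per-bushel profit from buying near, storing, and selling far.

    Positive only when the spread exceeds the cost of storage; in an
    efficient market, arbitrage should drive this back toward zero.
    """
    spread = far_price - near_price
    return round(spread - storage_cost, 4)

# Hypothetical: May at $3.20, September at $3.45, storage at $0.18/bushel
print(carry_arbitrage_profit(3.20, 3.45, 0.18))  # 0.07 per bushel
```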

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
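The efficiency notion described here can be illustrated with a minimal forecast-error check on made-up data; the price pairs below are invented for illustration and are not Santos’s series:

```python
# If futures prices are efficient forecasts of spot prices, the forecast
# errors (realized spot minus the earlier futures quote) should average
# near zero, rather than drifting persistently in one direction.
from statistics import mean

# Hypothetical (futures quote two months ahead, realized spot) pairs
pairs = [(0.42, 0.44), (0.45, 0.43), (0.40, 0.41), (0.43, 0.42), (0.44, 0.44)]

errors = [spot - fut for fut, spot in pairs]
bias = mean(errors)  # a persistent nonzero bias could reflect a risk premium
print(abs(bias) < 1e-6)
```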

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie 1979, 109). Attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (and are) not futures contracts, and although most exchanges had already outlawed them by the 1890s, the proposed legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which called into question the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment too harsh for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year-old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time likely reflected the breakdown of the Bretton Woods exchange rate regime, which had essentially fixed the relative values of industrial economies’ currencies to the American dollar (see Bordo and Eichengreen 1993), and the relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% in 1982. By 1985 this volume had dropped to less than one-fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
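To make the degree-day payoff concrete, here is a hypothetical sketch in Python. The 65°F baseline comes from the text; the $20-per-degree-day multiplier and the week of temperatures are illustrative assumptions, not actual contract specifications.

```python
# Hypothetical sketch of a weather (degree-day) futures settlement.
BASELINE_F = 65.0
DOLLARS_PER_DEGREE_DAY = 20.0   # assumed multiplier, not a contract spec

def heating_degree_days(daily_avg_temps):
    """Sum of max(0, 65 - T) over the period: colder weather -> more HDDs."""
    return sum(max(0.0, BASELINE_F - t) for t in daily_avg_temps)

# A made-up week of average daily temperatures (degrees F) in some region:
temps = [50, 48, 55, 60, 67, 63, 44]
hdd = heating_degree_days(temps)            # 15+17+10+5+0+2+21 = 70
payoff = hdd * DOLLARS_PER_DEGREE_DAY
print(f"HDD index: {hdd:.0f}, settlement value: ${payoff:,.0f}")
```

A buyer of such a contract profits when the period turns out colder (more degree-days) than the market expected, which is why heating distributors and utilities use these contracts as hedges.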

Table 1: Select Futures Contracts Traded as of 2002

| Agriculture | Currencies | Equity Indexes | Interest Rates | Metals & Energy |
|---|---|---|---|---|
| Corn | British pound | S&P 500 index | Eurodollars | Copper |
| Oats | Canadian dollar | Dow Jones Industrials | Euroyen | Aluminum |
| Soybeans | Japanese yen | S&P Midcap 400 | Euro-denominated bond | Gold |
| Soybean meal | Euro | Nasdaq 100 | Euroswiss | Platinum |
| Soybean oil | Swiss franc | NYSE index | Sterling | Palladium |
| Wheat | Australian dollar | Russell 2000 index | British gov. bond (gilt) | Silver |
| Barley | Mexican peso | Nikkei 225 | German gov. bond | Crude oil |
| Flaxseed | Brazilian real | FTSE index | Italian gov. bond | Heating oil |
| Canola | | CAC-40 | Canadian gov. bond | Gas oil |
| Rye | | DAX-30 | Treasury bonds | Natural gas |
| Cattle | | All ordinary | Treasury notes | Gasoline |
| Hogs | | Toronto 35 | Treasury bills | Propane |
| Pork bellies | | Dow Jones Euro STOXX 50 | LIBOR | CRB index |
| Cocoa | | | EURIBOR | Electricity |
| Coffee | | | Municipal bond index | Weather |
| Cotton | | | Federal funds rate | |
| Milk | | | Bankers’ acceptance | |
| Orange juice | | | | |
| Sugar | | | | |
| Lumber | | | | |
| Rice | | | | |

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

| Exchange | Abbrev. | Exchange | Abbrev. |
|---|---|---|---|
| Chicago Board of Trade | CBT | Montreal Exchange | ME |
| Chicago Mercantile Exchange | CME | Minneapolis Grain Exchange | MPLS |
| Coffee, Sugar & Cocoa Exchange, New York | CSCE | Unit of Euronext.liffe | NQLX |
| COMEX, a division of the NYME | CMX | New York Cotton Exchange | NYCE |
| European Exchange | EUREX | New York Futures Exchange | NYFE |
| Financial Exchange, a division of the NYCE | FINEX | New York Mercantile Exchange | NYME |
| International Petroleum Exchange | IPE | OneChicago | ONE |
| Kansas City Board of Trade | KC | Sydney Futures Exchange | SFE |
| London International Financial Futures Exchange | LIFFE | Singapore Exchange Ltd. | SGX |
| Marche a Terme International de France | MATIF | | |

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on Dec. 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.
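Cash settlement can be illustrated with a small sketch. Eurodollar futures are conventionally quoted as a price index with a fixed dollar value per price point ($2,500 per full point, i.e. $25 per basis point); the figures below follow that convention but should be read as an illustration, and the trade itself is hypothetical.

```python
# Illustrative cash-settlement arithmetic for a futures position.
DOLLARS_PER_POINT = 2_500.0   # conventional Eurodollar futures multiplier

def cash_settlement(entry_price, final_settle, n_contracts, long=True):
    """P&L paid as a cash balance at settlement; no physical delivery occurs."""
    move = final_settle - entry_price
    sign = 1.0 if long else -1.0
    return sign * move * DOLLARS_PER_POINT * n_contracts

# Hypothetical trade: buy 10 contracts at 94.50, final settlement at 94.75.
pnl = cash_settlement(94.50, 94.75, 10, long=True)
print(f"cash credited to the long: ${pnl:,.0f}")   # 0.25 point x $2,500 x 10
```

Because the payoff is a pure cash transfer, the same mechanism works for underlyings that cannot be delivered at all, such as a stock index level or a degree-day count.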

References and Further Reading

Baer, Julius B. and Olin. G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: Macmillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40% of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, futures trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), by contrast, is a decreasing function of the amount stored: the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract, an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
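Working's price-of-storage decomposition in this footnote can be written as a one-line function. The parameter values below are illustrative assumptions chosen only to show how a large convenience yield drives the price of storage negative when supplies are scarce.

```python
# Stylized version of Working's price-of-storage decomposition
# (all parameter values are illustrative assumptions, in $/bu).
def price_of_storage(physical_cost, financial_cost, convenience_yield):
    """Net marginal cost of carrying one more unit of inventory."""
    return physical_cost + financial_cost - convenience_yield

# Ample supplies: storage is costly, convenience yield is small.
normal = price_of_storage(physical_cost=0.04, financial_cost=0.03,
                          convenience_yield=0.02)

# Exceptionally low supplies: storage costs are tiny but each unit of
# inventory is very valuable to hold, so the price of storage turns
# negative and the nearby contract prices above the faraway one.
scarce = price_of_storage(physical_cost=0.01, financial_cost=0.01,
                          convenience_yield=0.08)

print(f"normal: {normal:+.2f} $/bu, scarce: {scarce:+.2f} $/bu")
```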

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed when a nineteenth century CBT wheat futures corner backfires on him, while the welfare of producers and consumers hangs in the balance.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed The Futures Trading Act, which was declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, but it was then hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person did not surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
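The per-capita claim can be checked with back-of-the-envelope arithmetic. Only the -4%, -6%, and -2.5% output changes and the 2% population growth come from the text; the growth rates assumed for the other years are illustrative fill-ins consistent with the narrative (a rebound past the 1892 peak by 1895, recovery in the late 1890s).

```python
# Stylized check: when does per-capita output regain its 1892 level?
POP_GROWTH = 0.02   # from the text: ~2% per year

# year -> real GNP growth from the prior year; 1893, 1894, and 1896 come
# from the text, the rest are purely illustrative assumptions.
growth = {1893: -0.04, 1894: -0.06, 1895: 0.12,
          1896: -0.025, 1897: 0.06, 1898: 0.07, 1899: 0.08}

gnp, pop = 100.0, 100.0          # index both at 100 in 1892
per_capita = {1892: 1.0}         # per-capita output relative to 1892
for year, g in sorted(growth.items()):
    gnp *= 1 + g
    pop *= 1 + POP_GROWTH
    per_capita[year] = gnp / pop

recovered = [y for y, pc in per_capita.items() if y > 1892 and pc >= 1.0]
print(f"first year per-capita GNP regains its 1892 level: {min(recovered)}")
```

The point of the exercise is that even though total output passed its old peak by 1895, steady 2% population growth keeps raising the bar, so per-capita output lags the aggregate recovery by several years.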

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1984

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world — below that in Britain, but higher than the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800, and 60% in 1860, had fallen to roughly 40% in 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market — unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide about 29% of farmers were encumbered by mortgages. One contemporary observer estimated 2.3 million farm mortgages nationwide in 1890 worth over $2.2 billion. But farmers in the plains were much more likely to be in debt. Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44, in Montana 41, and in Colorado 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles a year, on average, were built, including the Southern Pacific, Northern Pacific and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War, the gold standard and monetary policy, underconsumption (the economy was producing goods and services at a higher rate than society was consuming and the resulting inventory accumulation led firms to reduce employment and cut back production), a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly), and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce was improved. During the late nineteenth century, the United States had a negative net balance of payments. Passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors ordinarily more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks in the nation at large and outside New York, factory employment, wholesale prices, and railroad freight ton mileage advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn prices held steady, but at a low level, even as output fell by one-eighth. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash because investors’ willingness to buy mortgage paper was reduced by the declining expectation that it would yield a positive return.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November. Monetary stringency was a basic cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties and a sharp break in securities prices marked the advent of severe contraction. The great banking house of Baring and Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many industries fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year’s end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe, and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe to increase exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions temporarily reversed both flows, as Americans sold off foreign holdings and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities; then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. Fewer funds invested during the 1890s was one of the factors that, with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds, sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.
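The swing in British lending visible in Table 3 can be made concrete with a quick calculation. This is a sketch using only the Hoffmann figures reproduced above; total new issues fell by about two-thirds from the 1890 peak to the 1893 trough:

```python
# British new capital issues, millions of pounds sterling (Table 3, from Hoffmann).
issues = {
    1890: 142.6, 1891: 104.6, 1892: 81.1, 1893: 49.1, 1894: 91.8,
    1895: 104.7, 1896: 152.8, 1897: 157.3, 1898: 150.2,
}

# Percentage decline from the 1890 peak to the 1893 trough.
peak, trough = issues[1890], issues[1893]
decline = (peak - trough) / peak * 100
print(f"Fall in new issues, 1890 to 1893: {decline:.1f}%")  # about 65.6%
```

Note that this measures gross new issues, not the net capital flows cited in the text, which fell by a somewhat smaller proportion over 1891-93.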

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell as did exports and commerce in general. By the end of 1893, business failures numbering 15,242, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms failed that would have survived under ordinary circumstances. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making them more severe. Those whom depression struck hardest, as well as much of the general public and major Protestant churches, shored up their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth-century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began to swing slowly toward the governmental activism and interventionism associated with modern industrial societies, erecting in the process the intellectual foundation for the reform impulse that was to be called Progressivism in twentieth-century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans for the next century debated the most vital questions of their shared experience. The depression was a reminder of business slumps, commonweal above avarice, and principle above principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of, and proposals of means for adapting to, deep-rooted changes attending industrialization, urbanization, and other dimensions of the current transformation of the United States long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1. (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it became a necessity of everyday life. In an era where smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor of American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal fuel, there was little need to use mineral fuel in seventeenth and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make necessary improvements in order to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields and urban markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:
Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;
Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many states beyond Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries had appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less-skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous in the era before deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power — even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners and struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adoption of anthracite coal to iron making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.
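The production and price figures just cited imply steep compound trends. A short sketch, using only the numbers quoted above, makes them explicit:

```python
# Compound annual trends implied by the antebellum coal figures in the text.
def cagr(start, end, years):
    """Compound annual growth rate between two observations."""
    return (end / start) ** (1 / years) - 1

# National output rose from 2.5 to 8.4 million tons between 1840 and 1850.
output_growth = cagr(2.5, 8.4, 10)
# Anthracite fell from about $11/ton in 1830 to $5.50/ton in 1860 (New York).
price_trend = cagr(11.0, 5.50, 30)

print(f"Output growth, 1840-1850: {output_growth:.1%} per year")   # about 12.9%
print(f"Price trend, 1830-1860: {price_trend:.1%} per year")       # about -2.3%
```

Output thus roughly tripled each decade even as the nominal price halved over thirty years, which is consistent with the easy-entry, high-competition structure of the trade described above.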

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and in 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
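The inflation adjustment described here divides the nominal price by a price index. The sketch below uses hypothetical nominal prices and a hypothetical wholesale-price index (the actual series are not given in the text), with illustrative values chosen so that the result reproduces the roughly forty-five percent rise in the real anthracite price reported for 1864:

```python
# Sketch of deflating a nominal price to base-year (1860) dollars.
# Both series below are illustrative assumptions, not historical data.
nominal_price = {1860: 3.50, 1864: 9.00}   # $/ton, hypothetical
price_index = {1860: 100.0, 1864: 177.0}   # wholesale index, hypothetical

def real_price(year, base=1860):
    """Deflate a nominal price to base-year dollars."""
    return nominal_price[year] * price_index[base] / price_index[year]

change = (real_price(1864) / real_price(1860) - 1) * 100
print(f"Real price change, 1860-1864: {change:+.0f}%")  # prints +45%
```

The point of the exercise is that even after stripping out wartime inflation of over seventy percent, the price of anthracite still rose markedly in real terms.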

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms, or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia, with Richmond, Virginia, in 1873, the rich bituminous coalfields of southern West Virginia were opened for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing its line from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and required only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. As miners sought to remove more coal, however, shafts were dug deeper below the water line. As a result, coal mining required larger amounts of capital, as new systems of pumping, ventilation, and extraction demanded the use of steam power in mines. By the 1890s, electric cutting machines had replaced blasting as the means of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal with the new technology, but along with this increased productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom-and-bust periods, low profit margins, and cutthroat competition among firms. However it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA counted about a quarter of a million members, had raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised 57 million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coalfields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company symbolized a new coal industry in which hard-line positions developed in both labor’s and capital’s camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

Table 1: Coal Production in the United States, 1829-1899

Year    Anthracite    Bituminous    Percent Increase over Decade    Tons per Capita
(coal production in thousands of tons)
1829       138            102              —                            0.02
1839     1,008            552             550                           0.09
1849     3,995          2,453             313                           0.28
1859     9,620          6,013             142                           0.50
1869    17,083         15,821             110                           0.85
1879    30,208         37,898             107                           1.36
1889    45,547         95,683             107                           2.24
1899    60,418        193,323              80                           3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
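The “Percent Increase over Decade” column of Table 1 can be recomputed from the anthracite and bituminous figures. A short Python sketch, assuming the column refers to total (anthracite plus bituminous) production, an assumption the recomputation bears out, reproduces the published values:

```python
# Recompute the "Percent Increase over Decade" column of Table 1
# from the anthracite and bituminous figures (thousands of tons).
production = {  # year: (anthracite, bituminous)
    1829: (138, 102),
    1839: (1008, 552),
    1849: (3995, 2453),
    1859: (9620, 6013),
    1869: (17083, 15821),
    1879: (30208, 37898),
    1889: (45547, 95683),
    1899: (60418, 193323),
}

years = sorted(production)
totals = {y: sum(production[y]) for y in years}

# Percent increase of total production over the preceding decade.
for prev, curr in zip(years, years[1:]):
    pct = 100 * (totals[curr] - totals[prev]) / totals[prev]
    print(f"{prev}-{curr}: total {totals[curr]:>7} thousand tons, +{pct:.0f}%")
```

Rounded to whole percentages, the computed increases (550, 313, 142, 110, 107, 107, 80) match the census table.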

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America. Baltimore: Johns Hopkins University Press, 2004.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M. editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins Press, 1961.

Citation: Adams, Sean. “US Coal Industry in the Nineteenth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 23, 2003. URL http://eh.net/encyclopedia/the-us-coal-industry-in-the-nineteenth-century/

Bankruptcy Law in the United States

Bradley Hansen, Mary Washington College

Since 1996 over a million people a year have filed for bankruptcy in the United States. Most seek a discharge of debts in exchange for having their assets liquidated for the benefit of their creditors. The rest seek the assistance of bankruptcy courts in working out arrangements with their creditors. The law has not always been so kind to insolvent debtors. Throughout most of the nineteenth century there was no bankruptcy law in the United States, and most debtors found it impossible to receive a discharge from their debts. Early in the century debtors could have expected even harsher treatment, such as imprisonment for debt.

Table 1. Chronology of Bankruptcy Law in The United States, 1789-1978

Date Event
1789 The Constitution empowers Congress to enact uniform laws on the subject of bankruptcy.
1800 First bankruptcy law is enacted. The law allows only for involuntary bankruptcy of traders.
1803 First bankruptcy law is repealed amid complaints of excessive expenses and corruption.
1841 Second bankruptcy law is enacted in the wake of the Panics of 1837 and 1839. The law allows both voluntary and involuntary bankruptcy.
1843 1841 Bankruptcy Act is repealed, amid complaints about expenses and corruption.
1867 Prompted by demands arising from financial failures during the Panic of 1857 and the Civil War, Congress enacts the third bankruptcy law.
1874 The 1867 Bankruptcy Act is amended to allow for compositions.
1878 The 1867 Bankruptcy Law is repealed.
1881 The National Convention of Boards of Trade is formed to lobby for bankruptcy legislation.
1889 The National Convention of Representatives of Commercial Bodies is formed to lobby for bankruptcy legislation. The president of the Convention, Jay L. Torrey, drafts a bankruptcy bill.
1898 Congress passes a bankruptcy bill based on the Torrey bill.
1933-34 The 1898 Bankruptcy Act is amended to include railroad reorganization, corporate reorganization, and individual debtor arrangements.
1938 The Chandler Act amends the 1898 Bankruptcy Act, creating a menu of options for both business and non-business debtors.
1978 The 1898 Bankruptcy Act is replaced by The Bankruptcy Reform Act.

To say that there was no bankruptcy law in the United States for most of the nineteenth century is not to say that there were no laws governing insolvency or the collection of debts. Americans have always relied on credit and have always had laws governing the collection of debts. Debtor-creditor laws and their enforcement are important because they influence the supply and demand for credit. Laws that do not encourage the repayment of debts increase risk for creditors and reduce the supply of credit. On the other hand, laws that are too strict also have costs. Strict laws such as imprisonment for debt can discourage entrepreneurs from experimenting. Many of America’s most famous entrepreneurs, such as Henry Ford, failed at least once before making their fortunes.

Over the last two hundred years the United States has shifted from a legal regime that was primarily directed at the strict enforcement of debt contracts to one that provides numerous means to alter the terms of debt contracts. As the economy developed groups of people became convinced that strict enforcement of credit contracts was unfair, inefficient, contrary to the public interest, or simply not in their own self interest. Periodic financial crises in the nineteenth century generated demands for bankruptcy laws to discharge debts. They also led to the introduction of voluntary bankruptcy and the extension of the right to file for bankruptcy to all individuals. The expansion of interstate commerce in the late nineteenth century led to demands for a uniform and efficient bankruptcy law throughout the United States. The rise of railroads gave rise to a demand for corporate reorganization. The expansion of consumer credit in the twentieth century and the rise in consumer bankruptcy cases led to the introduction of arrangements into bankruptcy law, and continue to fuel demands for revision of bankruptcy law today.

Origins of American Bankruptcy Law

Like much of American law, the origins of both state laws for the collection of debt and federal bankruptcy law can be found in England. State laws are, in general, derived from common law procedures for the collection of debt. Under the common law a variety of procedures evolved to aid a creditor in collecting a debt. Generally, the creditor can obtain a judgment from a court for the amount that he is owed and then have a legal official seize some of the debtor’s property or wages to satisfy this judgment. In the past a defaulting debtor could also be placed in prison to coerce repayment. Bankruptcy law does not replace other collection laws, but it does supersede them. Creditors still use procedures such as garnishing a debtor’s wages, but if the debtor or another creditor files for bankruptcy, such collection efforts are stopped.

Under the U.S. Constitution, adopted in 1789, bankruptcy law became a matter of federal law in the United States. Two clauses of the Constitution influenced the evolution of bankruptcy law. First, in Article One, Section Eight, Congress was empowered to enact uniform laws on the subject of bankruptcy. Second, the Contract Clause prohibited states from passing laws that impair the obligation of contracts. Courts have generally interpreted these clauses so as to give the federal government wide latitude to alter the obligations of debt contracts while restricting state governments. States, however, are not completely barred from altering the terms of contracts. In its 1827 decision in Ogden v. Saunders, the Supreme Court declared that states could pass laws granting a discharge for debts incurred after the law was passed; however, a state discharge cannot be binding on creditors who are citizens of other states.

The evolution of bankruptcy law in the United States can be divided into two periods. In the first period, which encompasses most of the nineteenth century, Congress enacted three laws in the wake of financial crises. In each case the law was repealed within a few years amid complaints of high costs and corruption. The second period begins in 1881 when associations of merchants and manufacturers banded together to form a national association to lobby for a federal bankruptcy law. In contrast to previous demands for bankruptcy law, which were prompted largely by crises, late nineteenth century demands for bankruptcy law were for a permanent law suited to the needs of a commercial nation. In 1898 the Act to Establish a Uniform System of Bankruptcy was enacted and the United States has had a bankruptcy law ever since.

The Temporary Bankruptcy Acts of 1800, 1841 and 1867

Congress first exercised its power to enact uniform laws on bankruptcy in 1800. The debates in the Annals of Congress are brief but suggest that the demand for the law arose from individuals who were in financial distress. The law was modeled after the English bankruptcy law of the time. The law applied only to traders. Creditors could file a bankruptcy petition against a debtor, the debtor’s assets would be divided on a pro rata basis among his creditors, and the debtor would receive a discharge. Although debtors could not file a voluntary bankruptcy petition, it was generally believed that many debtors asked a friendly creditor to petition them into the bankruptcy court so that they could obtain a discharge. The law was intended to remain in effect for five years. Complaints that the law was expensive to administer, that it was difficult and costly to travel to federal courts, and that the law provided opportunities for fraud led to its repeal after only two years. Similar complaints were to follow the passage of subsequent bankruptcy laws.

Bankruptcy law largely disappeared from national politics until the Panic of 1839. A few petitions and memorials were sent to Congress in the wake of the Panic of 1819, but no law was passed. The Panic of 1839 and the recession that followed it brought forward a flood of petitions and memorials for bankruptcy legislation. Memorials typically declared that many business people had been brought to ruin by economic conditions that were beyond their control not through any fault of their own. In the wake of the Panic, Whigs made the attack on Democratic economic policies and the passage of bankruptcy relief central parts of their platform. After gaining control of Congress and the Presidency, the Whigs pushed through the 1841 Bankruptcy Act. The law went into effect February 2, 1842.

Like its predecessor, the Bankruptcy Act of 1841 was short-lived. The law was repealed March 3, 1843. The rapid about-face on bankruptcy was the result of the collapse of a bargain between Northern and Southern Whigs. Democrats overwhelmingly opposed the passage of the Act and supported its repeal. Southern Whigs also generally opposed a federal bankruptcy law. Northern Whigs appear to have obtained the Southern Whigs’ votes for passage by agreeing to distribute the proceeds from the sales of federal lands to the states. A majority of Southern Whigs voted for passage but then reversed their votes the next year. Despite its short life, over 41,000 petitions for bankruptcy, most of them voluntary, were filed under the 1841 law.

The primary innovations of the Bankruptcy Act of 1841 were the introduction of voluntary bankruptcy and the widening of the scope of occupations that could use the law. With the introduction of voluntary bankruptcy, debtors no longer had to resort to the assistance of a friendly creditor. Unlike the previous law in which only traders could become bankrupts, under the 1841 Act traders, bankers, brokers, factors, underwriters, and marine insurers could be made involuntary bankrupts and any person could apply for voluntary bankruptcy.

After repeal of the Bankruptcy Act of 1841, the subject of bankruptcy again disappeared from congressional consideration until the Panic of 1857, when appeals for a bankruptcy law resurfaced. The financial distress caused to Northern merchants by the Civil War further fueled demands for bankruptcy legislation. Though demands for a bankruptcy law persisted throughout the War, considerable opposition also existed to passing a law before the War was over. In the first Congress after the end of the War, the Bankruptcy Act of 1867 was enacted. The 1867 Act was amended several times and lasted longer than its predecessors. An 1874 amendment added compositions to bankruptcy law for the first time. Under the composition provision a debtor could offer a plan to distribute his assets among his creditors to settle the case. Again, complaints of excessive fees and expenses led to the repeal of the Bankruptcy Act in 1878. Table 2 shows the number of petitions filed under the 1867 law between 1867 and 1872.

Table 2. Bankruptcy Petitions, 1867-1872

Year Petitions
1867 7,345
1868 29,539
1869 5,921
1870 4,301
1871 5,438
1872 6,074

Source: Expenses of Proceedings in Bankruptcy In United States Courts. Senate Executive Document 19 (43-1) 1580.

During the first three quarters of the nineteenth century the demand for bankruptcy legislation rose with financial panics and fell as they passed. Many people came to believe that the forces that brought people to insolvency were often beyond their control and that giving them a fresh start was not only fair but in the best interest of society. Burdened with debts they had no hope of paying, debtors had no incentive to be productive, since creditors would take anything they earned. Freed from these debts, they could once again become productive members of society. The spread of the belief that debtors should not be subjected to the harshest elements of debt collection law can also be seen in numerous state laws enacted during the nineteenth century. Homestead and exemption laws declared property that creditors could not take. Stay and moratoria laws were passed during recessions to stall collection efforts. Over the course of the nineteenth century, states also abolished imprisonment for debt.

Demand For A Permanent Bankruptcy Law

The repeal of the 1867 Bankruptcy Act was followed almost immediately by a well-organized movement to obtain a new bankruptcy law. A national campaign by merchants and manufacturers to obtain bankruptcy legislation began in 1881 when The New York Board of Trade and Transportation organized a National Convention of Boards of Trade. The participants at the Convention endorsed a bankruptcy bill prepared by John Lowell, a judge from Massachusetts. They continued to lobby for the bill throughout the 1880s.

After failing to obtain passage of the Lowell bill, associations of merchants and manufacturers met again in 1889. Under the name of The National Convention of Representatives of Commercial Bodies they held meetings in St. Louis and in Minneapolis. The president of the Convention, a lawyer and businessman named Jay Torrey, drafted a bill that the Convention lobbied for throughout the 1890s. The bill allowed both voluntary and involuntary petitions, though wage earners and farmers could not be made involuntary bankrupts. The bill was primarily directed at liquidation but did include a provision for composition. A composition had to be approved by a majority of creditors in both number and value. In a compromise with states’ rights advocates, the bill declared that exemptions would be determined by the states.
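The composition rule described above, approval by a majority of creditors in both number and value, can be sketched as a simple check. The function and the claim figures below are hypothetical, invented purely for illustration:

```python
# Sketch of the double-majority approval rule for a composition as
# described above: a plan passes only with a majority of creditors by
# headcount AND a majority by value of claims. Figures are invented.

def composition_approved(votes):
    """votes: list of (claim_value, approved) pairs, one per creditor."""
    total_value = sum(value for value, _ in votes)
    yes_count = sum(1 for _, approved in votes if approved)
    yes_value = sum(value for value, approved in votes if approved)
    return yes_count > len(votes) / 2 and yes_value > total_value / 2

# Three of five creditors, holding $14,000 of $23,000 in claims, approve.
votes = [(9000, True), (3000, True), (5000, False), (4000, False), (2000, True)]
print(composition_approved(votes))  # True
```

Requiring both majorities prevents either a few large creditors or many small ones from imposing a plan on the rest.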

The merchants and manufacturers who organized the conventions provided credit to their customers whenever they delivered goods in advance of payment. They were troubled by three features of state debtor-creditor laws. First, the details of collection laws varied from state to state, forcing them to learn the laws of every state in which they wished to sell goods. Second, many state laws discriminated against foreign creditors, that is, creditors who were not citizens of the state. Third, many state laws provided for a first-come, first-served distribution of assets rather than a pro rata division. Under the first-come, first-served rule, the first creditor to go to court could claim all the assets necessary to pay his debts, leaving the last to receive nothing. This rule tended to create incentives for creditors to race to be the first to file a claim. Its effect was described by Jay Torrey: “If a creditor suspects his debtor is in financial trouble, he usually commences an attachment suit, and as a result the debtor is thrown into liquidation irrespective of whether he is solvent or insolvent. This course is ordinarily imperative because if he does not pursue that course some other creditor will.” Thus the law could actually precipitate business failures. As interstate commerce expanded in the late nineteenth century, more merchants and manufacturers experienced these three problems.
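The contrast between the two distribution rules can be made concrete with a hypothetical sketch; both functions and all dollar figures are invented for the example:

```python
# Illustration of the two distribution rules described above. Under
# first-come, first-served, the creditor who wins the race to the
# courthouse is paid in full out of the assets; under the pro rata rule
# the assets are divided in proportion to each claim.

def first_come_first_served(assets, claims):
    """Pay creditors in filing order until the assets run out."""
    payouts = {}
    for creditor, owed in claims:  # claims listed in filing order
        paid = min(owed, assets)
        payouts[creditor] = paid
        assets -= paid
    return payouts

def pro_rata(assets, claims):
    """Divide the assets in proportion to the size of each claim."""
    total_owed = sum(owed for _, owed in claims)
    return {creditor: assets * owed / total_owed for creditor, owed in claims}

# $10,000 in assets against two equal $8,000 claims.
claims = [("A", 8000.0), ("B", 8000.0)]
print(first_come_first_served(10000.0, claims))  # {'A': 8000.0, 'B': 2000.0}
print(pro_rata(10000.0, claims))                 # {'A': 5000.0, 'B': 5000.0}
```

The race rewards the first filer with full payment and leaves the laggard with $2,000, while the pro rata division gives each creditor $5,000, removing the incentive to force a possibly solvent debtor into liquidation.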

Merchants and manufacturers also found it easier to form a national organization in the late nineteenth century because of the growth of trade associations, boards of trade, chambers of commerce and other commercial organizations. By forming a national organization composed of businessmen’s associations from all over the country, merchants and manufacturers were able to act in unison in drafting and lobbying for a bankruptcy bill. The bill they drafted not only provided uniformity and a pro rata distribution, but was also designed to prevent the excessive fees and expenses that had been a major complaint against previous bankruptcy laws.

As early as 1884, the Republican Party supported the bankruptcy bills put forward by the merchants and manufacturers. A majority in both the Republican and Democratic parties supported bankruptcy legislation during the late nineteenth century. It nevertheless took nearly twenty years to enact such legislation because the two parties supported different versions of bankruptcy law. The Democratic Party supported bills that were purely voluntary (creditors could not initiate proceedings) and temporary (the law would remain in effect for only a few years). The requirement that the law be temporary was crucial to Democrats because a vote for a permanent bankruptcy law would have been a vote for the expansion of federal power and against states’ rights, a central component of Democratic policy. Throughout the 1880s and 1890s, votes on bankruptcy split strictly along party lines. The majority of Republicans preferred the status quo to the Democrats’ bills, and the majority of Democrats preferred the status quo to the Republicans’ bills. Because control of Congress was split between the two parties for most of the last quarter of the nineteenth century, neither side could force through its version of bankruptcy law. This period of divided government ended with the 55th Congress, in which the Bankruptcy Act of 1898 was passed.

Railroad Receivership and the Origins of Corporate Reorganization

The 1898 Bankruptcy Act was designed to aid creditors in the liquidation of an insolvent debtor’s assets, but one of the important features of current bankruptcy law is the provision for reorganization of insolvent corporations. To find the origins of corporate reorganization one has to look outside the early evolution of bankruptcy law and look instead at the evolution of receiverships for insolvent railroads. A receiver is an individual appointed by a court to take control of some property, but courts in the nineteenth century developed this tool as a means to reorganize troubled railroads. The first reorganization through receivership occurred in 1846, when a Georgia court appointed a receiver over the insolvent Munroe Railway Co. and successfully reorganized it as the Macon and Western Railway. In the last two decades of the nineteenth century the number of receiverships increased dramatically; see Table 3.

In theory, courts were supposed to appoint an indifferent party as receiver, and the receiver was merely to conserve the railroad while the best means to liquidate it was ascertained. In fact, judges routinely appointed the president, vice-president, or other officers of the insolvent railway and assigned them the task of getting the railroad back on its feet. The object of the receivership was typically a sale of the railroad as a whole, but the sale was at least partly a fiction. The sole bidder was usually a committee of the bondholders using their bonds as payment. Thus the receivership involved a financial reorganization of the firm in which the bond and stock holders of the railroad traded in their old securities for new ones. The task of the reorganizers was to find a plan acceptable to the bondholders. For example, in the Wabash receivership of 1886, first mortgage bondholders ultimately agreed to exchange their 7 percent bonds for new ones of 5 percent. The sale resulted in the creation of a new railroad with the assets of the old. Often the transformation was simply a matter of changing “Railway” to “Railroad” in the name of the corporation. Throughout the late nineteenth and early twentieth centuries judges denied other corporations the right to reorganize through receivership, emphasizing that railroads were special because of their importance to the public.

Unlike the credit supplied by merchants and manufacturers, much of the debt of railroads was secured. For example, bondholders might hold a mortgage entitling them to claim a specific line of track if the railroad failed to make its bond payments. If a railroad became insolvent, different groups of bondholders might claim different parts of the railroad. Such piecemeal liquidation presented two problems in the case of railroads. First, many people believed that piecemeal liquidation would destroy much of the value of the assets. In his 1859 Treatise on the Law of Railways, Isaac Redfield explained that, “The railway, like a complicated machine, consists of a great number of parts, the combined action of which is necessary to produce revenue.” Second, railroads were regarded as quasi-public corporations: they were given subsidies and special privileges, and their charters often stated that their corporate status had been granted in exchange for service to the public. Courts were therefore reluctant to treat insolvent railroads like other enterprises and instead used receivership proceedings to make sure that the railroad continued to operate while its finances were reorganized.

Table 3. Railroad Receiverships, 1870-1897

Year    Receiverships Established    Mileage in Receivership    Percentage of Mileage Put in Receivership
1870 3 531 1
1871 4 644 1.07
1872 4 535 0.81
1873 10 1,357 1.93
1874 33 4,414 6.1
1875 43 7,340 9.91
1876 25 4,714 6.14
1877 33 3,090 3.91
1878 27 2,371 2.9
1879 12 1,102 1.27
1880 13 940 1.01
1881 5 110 0.11
1882 13 912 0.79
1883 12 2,041 1.68
1884 40 8,731 6.96
1885 44 7,523 5.86
1886 12 1,602 1.17
1887 10 1,114 0.74
1888 22 3,205 2.05
1889 24 3,784 2.35
1890 20 2,460 1.48
1891 29 2,017 1.18
1892 40 4,313 2.46
1893 132 27,570 15.51
1894 50 4,139 2.31
1895 32 3,227 1.78
1896 39 3,715 2.03
1897 21 1,536 0.83

Source: Swain, H. H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Depression Era Bankruptcy Reforms

Reorganization and bankruptcy were brought together by amendments to the 1898 Bankruptcy Act during the Great Depression. By the late 1920s, a number of problems had become apparent with both the bankruptcy law and receivership. Table 4 shows the number of bankruptcy petitions filed each year since the law was enacted. The use of consumer credit expanded rapidly in the 1920s, and so did wage earner bankruptcy cases. As Table 5 shows, voluntary bankruptcy by wage earners became an increasingly large proportion of bankruptcy petitions. Unlike mercantile bankruptcy cases, many wage earner cases involved no assets at all. Expecting no return, many creditors paid little attention to bankruptcy cases, and corruption spread in the bankruptcy courts. An investigation into bankruptcy in the Southern District of New York recorded numerous abuses and led to the disbarment of more than a dozen lawyers.

In the wake of the investigation, President Hoover appointed Thomas Thacher to investigate bankruptcy procedure in the United States. The Thacher Report recommended that an administrative staff be created to oversee bankruptcies; the bankruptcy administrators would be empowered to investigate bankrupts and to reject requests for discharge. The report also suggested that many debtors could pay their debts if given an opportunity to work out an arrangement with their creditors, and that procedures for the adjustment or extension of debts be added to the law.

Corporate lawyers also identified three problems with corporate receiverships. First, it was necessary to obtain an ancillary receivership in each federal district in which the corporation had assets. Second, some creditors might withhold their approval of a reorganization plan in exchange for a better deal for themselves. Third, judges were unwilling to apply reorganization through receivership to corporations other than railroads. Consequently, the Thacher Report suggested that procedures for corporate reorganization also be incorporated into bankruptcy law.

Table 4. Bankruptcy Petitions Filed, 1899-1997

Year    Voluntary    Involuntary    Total    Petitions per 10,000 Population    Percentage Involuntary
1899 20,994 1,452 22,446 3.00 6.47
1900 20,128 1,810 21,938 2.88 8.25
1901 17,015 1,992 19,007 2.45 10.48
1902 16,374 2,108 18,482 2.33 11.41
1903 14,308 2,567 16,875 2.09 15.21
1904 13,784 3,298 17,082 2.08 19.31
1905 13,852 3,094 16,946 2.02 18.26
1906 10,526 2,446 12,972 1.52 18.86
1907 11,127 3,033 14,160 1.63 21.42
1908 13,109 4,709 17,818 2.01 26.43
1909 13,638 4,380 18,018 1.99 24.31
1910 14,059 3,994 18,053 1.95 22.12
1911 14,907 4,431 19,338 2.06 22.91
1912 15,313 4,432 19,745 2.07 22.45
1913 16,361 4,569 20,930 2.15 21.83
1914 17,924 5,035 22,959 2.32 21.93
1915 21,979 5,653 27,632 2.75 20.46
1916 23,027 4,341 27,368 2.68 15.86
1917 21,161 3,677 24,838 2.41 14.80
1918 17,261 3,124 20,385 1.98 15.32
1919 12,035 2,013 14,048 1.34 14.33
1920 11,333 2,225 13,558 1.27 16.41
1921 16,645 6,167 22,812 2.10 27.03
1922 28,879 9,286 38,165 3.47 24.33
1923 33,922 7,832 41,754 3.73 18.76
1924 36,977 6,542 43,519 3.81 15.03
1925 39,328 6,313 45,641 3.94 13.83
1926 40,962 5,412 46,374 3.95 11.67
1927 43,070 5,688 48,758 4.10 11.67
1928 47,136 5,928 53,064 4.40 11.17
1929 51,930 5,350 57,280 4.70 9.34
1930 57,299 5,546 62,845 5.11 8.82
1931 58,780 6,555 65,335 5.27 10.03
1932 62,475 7,574 70,049 5.61 10.81
1933 56,049 6,207 62,256 4.96 9.97
1934 n.a. n.a. 58,888 4.66 n.a.
1935 n.a. n.a. 69,153 5.43 n.a.
1936 n.a. n.a. 60,624 4.73 n.a.
1937 55,842 1,643 57,485 4.46 2.86
1938 55,137 2,169 57,306 4.41 3.78
1939 48,865 2,132 50,997 3.90 4.18
1940 43,902 1,752 45,654 3.46 3.84
1941 47,581 1,491 49,072 3.69 3.04
1942 44,366 1,295 45,661 3.41 2.84
1943 30,913 649 31,562 2.35 2.06
1944 17,629 277 17,906 1.35 1.55
1945 11,101 264 11,365 0.86 2.38
1946 8,293 268 8,561 0.61 3.13
1947 9,657 697 10,354 0.72 6.73
1948 13,546 1,029 14,575 1.00 7.06
1949 18,882 1,240 20,122 1.35 6.16
1950 25,263 1,369 26,632 1.76 5.14
1951 26,594 1,099 27,693 1.81 3.97
1952 25,890 1,059 26,949 1.73 3.93
1953 29,815 1,064 30,879 1.95 3.45
1954 41,335 1,398 42,733 2.65 3.27
1955 47,650 1,249 48,899 2.98 2.55
1956 50,655 1,240 51,895 3.10 2.39
1957 60,335 1,189 61,524 3.61 1.93
1958 76,048 1,413 77,461 4.47 1.82
1959 85,502 1,288 86,790 4.90 1.48
1960 94,414 1,296 95,710 5.43 1.35
1961 124,386 1,444 125,830 6.99 1.15
1962 122,499 1,382 123,881 6.77 1.12
1963 128,405 1,409 129,814 6.99 1.09
1964 141,828 1,339 143,167 7.60 0.94
1965 149,820 1,317 151,137 7.91 0.87
1966 161,840 1,165 163,005 8.42 0.72
1967 173,884 1,241 175,125 8.95 0.71
1968 164,592 1,001 165,593 8.39 0.60
1969 154,054 946 155,000 7.77 0.61
1970 161,366 1,085 162,451 8.07 0.67
1971 167,149 1,215 168,364 8.26 0.72
1972 152,840 1,094 153,934 7.33 0.71
1973 144,929 985 145,914 6.89 0.68
1974 156,958 1,009 157,967 7.39 0.64
1975 208,064 1,266 209,330 9.69 0.60
1976 207,926 1,141 209,067 9.59 0.55
1977 180,062 1,132 181,194 8.23 0.62
1978 167,776 995 168,771 7.58 0.59
1979 182,344 915 183,259 8.14 0.50
1980 359,768 1,184 360,952 15.85 0.33
1981 358,997 1,332 360,329 15.67 0.37
1982 366,331 1,535 367,866 15.84 0.42
1983 373,064 1,670 374,734 15.99 0.45
1984 342,848 1,447 344,295 14.57 0.42
1985 362,939 1,597 364,536 15.29 0.44
1986 476,214 1,642 477,856 19.86 0.34
1987 559,658 1,620 561,278 23.12 0.29
1988 593,158 1,409 594,567 24.27 0.24
1989 641,528 1,465 642,993 25.71 0.23
1990 723,886 1,598 725,484 29.03 0.22
1991 878,626 1,773 880,399 34.85 0.20
1992 971,047 1,443 972,490 38.08 0.15
1993 917,350 1,384 918,734 35.60 0.15
1994 844,087 1,170 845,257 32.43 0.14
1995 856,991 1,113 858,104 32.62 0.13
1996 1,040,915 1,195 1,042,110 39.26 0.11
1997 1,315,782 1,217 1,316,999 49.16 0.09

Sources: 1899-1938, Annual Report of the Attorney General of the United States; 1939-1997, Statistical Abstract of the United States, various years. The Report of the Attorney General did not provide the numbers of voluntary and involuntary petitions for 1934-36.

Table 5. Wage Earner Bankruptcy and No Asset Cases, 1899-1933

Year    Wage Earner Cases    Percentage of Cases with No Assets
1899 5,288 51.12
1900 7,516 40.52
1901 7,068 48.99
1902 6,859 47.25
1903 4,852 41.36
1904 5,291 40.55
1905 5,426 40.75
1906 2,748 42.29
1907 3,257 42.11
1908 3,492 40.29
1909 3,528 38.46
1910 4,366 36.49
1911 4,139 48.14
1912 4,161 50.70
1913 4,863 49.63
1914 5,773 49.96
1915 6,632 49.88
1916 6,418 53.29
1917 7,787 57.12
1918 8,230 57.05
1919 6,743 64.53
1920 5,601 67.41
1921 5,897 65.66
1922 7,550 52.70
1923 10,173 61.10
1924 13,126 62.17
1925 14,444 61.23
1926 16,770 64.02
1927 18,494 64.86
1928 21,510 63.19
1929 25,478 67.34
1930 28,979 68.44
1931 29,698 69.15
1932 29,742 66.25
1933 27,385 62.76

Source: Annual Report of the Attorney General of the United States, various years.

In 1933, Congress enacted amendments that allowed farmers and wage earners to seek arrangements. Arrangements offered more flexibility than compositions: debtors could offer to pay all or part of their debts over a longer period of time. Congress also added Section 77, which provided for railroad reorganization. Section 77 solved two of the problems that had plagued corporate reorganization. Bankruptcy courts had jurisdiction over assets throughout the country, so ancillary receiverships were not needed. The amendment also alleviated the holdout problem by making a two-thirds vote of a class of creditors binding on all members of the class. In 1934, Congress extended reorganization to non-railroad corporations as well. The Thacher Report’s recommendation for a bankruptcy administrator was not enacted, largely because of opposition from bankruptcy lawyers. The 1898 Bankruptcy Act had created a well-organized group with a vested interest in the evolution of the law: bankruptcy lawyers.

Although the 1933-34 reforms were ones that bankruptcy lawyers and judges had wanted, many of them believed that the law could be further improved. In 1932, the Commercial Law League, the American Bar Association, the National Association of Credit Management, and the National Association of Referees in Bankruptcy joined together to form the National Bankruptcy Conference. The culmination of their efforts was the Chandler Act of 1938, which created a menu of options for both individual and corporate debtors. Debtors could choose traditional liquidation, seek an arrangement with their creditors through Chapter 10 of the Act, or attempt to obtain an extension through Chapter 12. A corporation could seek an arrangement through Chapter 11 or reorganization through Chapter 10. Chapter 11 allowed corporations to alter only their unsecured debt, whereas Chapter 10 allowed reorganization of both secured and unsecured debt. Corporations nevertheless tended to prefer Chapter 11, because Chapter 10 required Securities and Exchange Commission review for all publicly traded firms with more than $250,000 in liabilities.

By 1938 modern American bankruptcy law had attained its central features. The law dealt with all types of individuals and businesses. It allowed both voluntary and involuntary petitions, and it enabled debtors to choose liquidation and a discharge or some type of readjustment of their debts. By 1939, the vast majority of bankruptcy cases were, as they are now, voluntary consumer bankruptcy cases. After 1939 the number of involuntary bankruptcy cases never again rose above 2,000 (see Table 4). The decline of involuntary bankruptcy cases appears to have been associated with the decline in business failures (see Table 6). According to Dun and Bradstreet, the number of failures per 10,000 listed concerns averaged 100 per year from 1870 to 1933; from 1934 to 1988 the failure rate averaged 50 per 10,000 concerns, and it did not rise above 70 per 10,000 listed concerns again until the 1980s. Similarly, the number of failures, which had averaged over 20,000 a year in the 1920s, did not reach 20,000 a year again until the 1980s. The mercantile failures that had so troubled late nineteenth-century merchants and manufacturers were much less of a problem after the Great Depression.

Table 6. Business Failures, 1870-1997

Year    Failures    Failures per 10,000 Firms
1870 3,546 83
1871 2,915 64
1872 4,069 81
1873 5,183 105
1874 5,830 104
1875 7,740 128
1876 9,092 142
1877 8,872 139
1878 10,478 158
1879 6,658 95
1880 4,735 63
1881 5,582 71
1882 6,738 82
1883 9,184 106
1884 10,968 121
1885 10,637 116
1886 9,834 101
1887 9,634 97
1888 10,679 103
1889 10,882 103
1890 10,907 99
1891 12,273 107
1892 10,344 89
1893 15,242 130
1894 13,885 123
1895 13,197 112
1896 15,088 133
1897 13,351 125
1898 12,186 111
1899 9,337 82
1900 10,774 92
1901 11,002 90
1902 11,615 93
1903 12,069 94
1904 12,199 92
1905 11,520 85
1906 10,682 77
1907 11,725 83
1908 15,690 108
1909 12,924 87
1910 12,652 84
1911 13,441 88
1912 15,452 100
1913 16,037 98
1914 18,280 118
1915 22,156 133
1916 16,993 100
1917 13,855 80
1918 9,982 59
1919 6,451 37
1920 8,881 48
1921 19,652 102
1922 23,676 120
1923 18,718 93
1924 20,615 100
1925 21,214 100
1926 21,773 101
1927 23,146 106
1928 23,842 109
1929 22,909 104
1930 26,355 122
1931 28,285 133
1932 31,822 154
1933 19,859 100
1934 12,091 61
1935 12,244 62
1936 9,607 48
1937 9,490 46
1938 12,836 61
1939 14,768 70
1940 13,619 63
1941 11,848 55
1942 9,405 45
1943 3,221 16
1944 1,222 7
1945 809 4
1946 1,129 5
1947 3,474 14
1948 5,250 20
1949 9,246 34
1950 9,162 34
1951 8,058 31
1952 7,611 29
1953 8,862 33
1954 11,086 42
1955 10,969 42
1956 12,686 48
1957 13,739 52
1958 14,964 56
1959 14,053 52
1960 15,445 57
1961 17,075 64
1962 15,782 61
1963 14,374 56
1964 13,501 53
1965 13,514 53
1966 13,061 52
1967 12,364 49
1968 9,636 39
1969 9,154 37
1970 10,748 44
1971 10,326 42
1972 9,566 38
1973 9,345 36
1974 9,915 38
1975 11,432 43
1976 9,628 35
1977 7,919 28
1978 6,619 24
1979 7,564 28
1980 11,742 42
1981 16,794 61
1982 24,908 88
1983 31,334 110
1984 52,078 107
1985 57,078 115
1986 61,616 120
1987 61,111 102
1988 57,098 98
1989 50,631 65
1990 60,747 74
1991 88,140 107
1992 97,069 110
1993 86,133 96
1994 71,558 86
1995 71,128 82
1996 71,931 86
1997 84,342 89

Sources: United States. Historical Statistics of the United States: Bicentennial Edition. 1975; and United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

The Bankruptcy Reform Act of 1978

In contrast to the decline in business failures, personal bankruptcy climbed steadily. Prompted by a rise in personal bankruptcies in the 1960s, Congress initiated an investigation of bankruptcy law that culminated in the Bankruptcy Reform Act of 1978, which replaced the much-amended 1898 Bankruptcy Act. The Bankruptcy Reform Act, also known as the Bankruptcy Code or simply “the Code,” maintains the menu of options for debtors embodied in the Chandler Act. It provides Chapter 7 liquidation for businesses and individuals, Chapter 11 reorganization, Chapter 13 adjustment of debts for individuals with regular income, and Chapter 12 readjustment for farmers. In 1991, seventy-one percent of all cases were Chapter 7 and twenty-seven percent were Chapter 13. Many of the changes introduced by the Code made bankruptcy, especially Chapter 13, more attractive to debtors, and the number of bankruptcy petitions climbed rapidly after the law was enacted. Lobbying by creditor groups and a Supreme Court decision that ruled certain administrative provisions of the Act unconstitutional led to the Bankruptcy Amendments and Federal Judgeship Act of 1984, which attempted to roll back some of the pro-debtor provisions of the Code. Because bankruptcy filings continued their rapid ascent after the 1984 amendments, recent studies have tended to look toward changes in other factors, such as consumer finance, to explain the explosion in bankruptcy cases.

Bankruptcy law continues to evolve. To understand the evolution of bankruptcy law is to understand why groups of people came to believe that existing debt collection law was inadequate and to see how those people were able to use courts and legislatures to change the law. In the early nineteenth century demands were largely driven by victims of financial crises. In the late nineteenth century, merchants and manufacturers demanded a law that would facilitate interstate commerce. Unlike its predecessors, the 1898 Bankruptcy Act was not repealed after a few years and over time it gave rise to a group with a vested interest in bankruptcy law, bankruptcy lawyers. Bankruptcy lawyers have played a prominent role in drafting and lobbying for bankruptcy reform since the 1930s. Credit card companies and customers may be expected to play a significant role in changing bankruptcy law in the future.

References

Balleisen, Edward. Navigating Failure: Bankruptcy and Commercial Society in Antebellum America. Chapel Hill: University of North Carolina Press. 2001.

Balleisen, Edward. “Vulture Capitalism in Antebellum America: The 1841 Federal Bankruptcy Act and the Exploitation of Financial Distress.” Business History Review 70, Spring (1996): 473-516.

Berglof, Erik and Howard Rosenthal. “The Political Economy of American Bankruptcy: The Evidence from Roll Call Voting, 1800-1978.” Working paper, Princeton University, 1999.

Coleman, Peter J. Debtors and Creditors in America: Insolvency, Imprisonment for Debt, and Bankruptcy, 1607-1900. Madison: The State Historical Society of Wisconsin. 1974.

Hansen, Bradley. “The Political Economy of Bankruptcy: The 1898 Act to Establish A Uniform System of Bankruptcy.” Essays in Economic and Business History 15, (1997):155-71.

Hansen, Bradley. “Commercial Associations and the Creation of a National Economy: The Demand for Federal Bankruptcy Law.” Business History Review 72, Spring (1998): 86-113.

Hansen, Bradley. “The People’s Welfare and the Origins of Corporate Reorganization: The Wabash Receivership Reconsidered.” Business History Review 74, Autumn (2000): 377-405.

Martin, Albro. “Railroads and the Equity Receivership: An Essay on Institutional Change.” Journal of Economic History 34, (1974): 685-709.

Matthews, Barbara. Forgive Us Our Debts: Bankruptcy And Insolvency in America, 1763-1841. Ph. D. diss. Brown University. 1994.

Moss, David and Gibbs A. Johnson. “The Rise of Consumer Bankruptcy: Evolution, Revolution or Both?” American Bankruptcy Law Journal 73, Spring (1999): 311-51.

Sandage, Scott. Deadbeats, Drunkards and Dreamers: A Cultural History of Failure in America, 1819-1893. Ph. D. diss. Rutgers University. 1995.

Skeel, David A. “An Evolutionary Theory of Corporate Law and Corporate Bankruptcy.” Vanderbilt Law Review 51 (1998): 1325-1398.

Skeel, David A. “The Genius of the 1898 Bankruptcy Act.” Bankruptcy Developments Journal 15, (1999): 321-341.

Skeel, David A. Debt’s Dominion: A History of Bankruptcy Law in America. Princeton: Princeton University Press. 2001.

Sullivan, Theresa, Elizabeth Warren and Jay Westbrook. As We Forgive Our Debtors: Bankruptcy and Consumer Credit in America. Oxford: Oxford University Press. 1989.

Swain, H.H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Tufano, Peter. “Business Failure, Judicial Intervention, and Financial Innovation: Restructuring U. S. Railroads in the Nineteenth Century.” Business History Review 71, Spring (1997):1-40.

United States. Report of the Attorney-General. Washington D.C.: GPO. Various years.

United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

United States. Historical Statistics of the United States: Bicentennial Edition. 1975.

Warren, Charles. Bankruptcy In United States History. Cambridge: Harvard University Press. 1935.

Citation: Hansen, Bradley. “Bankruptcy Law in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/bankruptcy-law-in-the-united-states/