
Petroleum and Public Safety: Risk Management in the Gulf South, 1901-2015

Author(s):McSwain, James B.
Reviewer(s):Aldrich, Mark

Published by EH.Net (November 2018)

James B. McSwain, Petroleum and Public Safety: Risk Management in the Gulf South, 1901-2015. Baton Rouge: LSU Press, 2018. xxii + 368 pp., $55 (hardcover), ISBN: 978-0-8071-6912-4.

Reviewed for EH.Net by Mark Aldrich, Department of Economics, Smith College.

 

The primary focus of this book is the development of regulations governing the safety of storing and shipping petroleum in four Gulf Coast cities — Mobile, New Orleans, Houston and Galveston — primarily around the turn of the twentieth century. The book contains, in addition to a preface and introduction, a chapter on each of these cities. There is also an initial chapter entitled “Petroleum and Nineteenth Century Risk Culture,” as well as a conclusion and epilogue. The basic arguments are that insurance companies developed an approach to petroleum risk management in the nineteenth century and that the gulf cities “convert[ed] insurance rules into legally enforceable regulations” (p. 193).

The book reflects an immense amount of research into primary and secondary works — the author seems to have read every insurance publication there was — and the chapters on individual cities contain detail that is at times overwhelming. Although there are some differences among the cities, in each case the basic story revolves around squabbles between various interest groups — petroleum suppliers and users, third parties, and insurance interests.

I found the first chapter the most interesting as the author traces efforts of various groups to corral the risks of producing and consuming petroleum products from the beginnings to about 1900, and readers may be especially interested in the discussion of insurance companies’ safety work. The discussion of the intertwining of insurance standards and rates with municipal regulations is well done, but the chapter might have included more economic analysis; there is no discussion of moral hazard or of the externalities that bedeviled fire insurance and which surely help account for insurance companies’ enthusiasm for city-wide rules. Nor does the author integrate Dalit Baranoff’s work on competitive conditions in rate setting; did rate wars spill over to standards as well? The author hints (p. 44) at this relationship between rates and standards but does not much develop it.

Markets shaped risks in other ways as well. The author notes that adulteration of kerosene diminished as markets appeared for naphtha, but he ignores the role of prices in this story. The price spread between (cheap) naphtha and (expensive) kerosene, which measured the payoff to adulteration, gradually declined, and so, apparently, did adulteration. Standard Oil probably played a role here that the author might also have noted. Company histories depict Rockefeller as obsessed with quality control, and perhaps one reason was that with Standard marketing kerosene under its own name, blowing up customers could have been bad for repeat business. The author does discuss Standard's marketing of fuel oil as factory fuel, giving it a mixed safety report card (pp. 40-41).

The end of chapter 1 provides a nice summary of how concern with petroleum risks evolved over the nineteenth century from a primary focus on volatile products (kerosene) to industrial uses of fuel oil, culminating in 1902 in a set of "Rules and Requirements" for fuel oil use and storage promulgated by the National Board of Fire Underwriters. This was the bible from which the insurance industry would try to shape public and private action, and the author discusses it in detail. A table might have brought this into sharper focus.

I found the four city case studies less interesting — in part because they seem broadly similar. While in his introduction the author emphasizes the differences among these cities in size and economic base, these seem to make comparatively little difference to the problem at hand — how to cope with the new risks arising from the large-scale use of petroleum after the Texas discoveries surrounding Spindletop. The basic focus is on the development of municipal regulations governing storage and transportation of crude oil and its products. The regulatory focus included such matters as the size and construction of tanks and pipelines as well as their location. Surprisingly, given modern interest in environmental justice, there is no discussion of how neighborhood racial makeup may have affected location decisions. As the author tells us, the parties shaping the rules included the oil interests, insurance companies and others. For third parties who might bear the costs of fire but not of protecting stored oil, regulations could not be too strong, but one wonders why there were such major disagreements between oil and insurance interests. In discussing events in Mobile, Alabama, at one point (p. 67) the author asserts that “potential dangers to property left other parties unfazed,” but surely tort law must have internalized risks to third parties. What may well account for some of the disagreements was expertise (as the author hints on page 82), for the insurance companies brought far more experience to the table than did the oil companies.

The author’s prose sometimes makes for difficult reading. There are too many bromides: the introductory sentence of the conclusion reads “These Gulf South cities created policies to manage risks associated with storing, transporting and consuming petroleum for fuel” (p. 190). There are also long sentences with vague words that do not read easily. Consider the following one-sentence paragraph from chapter 1 (p. 13): “Acting in step with the evolution of lighting and heating technology, as well as petroleum production, refining and transportation, federal and state governments as well as municipalities formed a risk management tradition shaped by safety and liability concerns for passenger vessels and insured property in which lamps, fixtures and gas machines burned turpentine, camphene mixtures or turpentine and alcohol, as well as naphtha, gasoline and kerosene.” Finding the focus of such constructions is challenging.

These chapters contain no discussion of insurance rates. Yet Baranoff points out that as these events were playing out, insurance interests were trying to stabilize rates and ensure that backsliders did not undercut local boards. At one point (p. 59) the author notes a company used nearby rates as evidence that its oil storage tank was acceptable to insurance companies, but such insights are exceptional. In general, one wonders whether insurance companies may have employed ratings to shape safety outcomes in these cities, and if not, why not.

Economists will wonder if any of these rules and policies really did reduce risks, and they will not find an answer here. Granted that the data are imperfect; yet the author never asks the question. Were fire risks higher than average in these cities; were oil fires a peculiar problem; did risks decline and rates change with the new regulations? A stab at answering some of these questions, however imperfect, would have made these chapters more interesting to this reviewer.

 
Mark Aldrich is the author of Back on Track: American Railroad Accidents and Safety, 1965-2015 (Johns Hopkins University Press).

Copyright (c) 2018 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (November 2018). All EH.Net reviews are archived at http://www.eh.net/BookReview.

Subject(s):Agriculture, Natural Resources, and Extractive Industries
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII

The Foundations of Female Entrepreneurship: Enterprise, Home and Household in London, c. 1800-1870

Author(s):Kay, Alison C.
Reviewer(s):Burnette, Joyce

Published by EH.NET (March 2010)

Alison C. Kay, The Foundations of Female Entrepreneurship: Enterprise, Home and Household in London, c. 1800-1870. New York: Routledge, 2009. xv + 185 pp. $138 (hardcover), ISBN: 978-0-415-43174-3

Reviewed for EH.NET by Joyce Burnette, Department of Economics, Wabash College.

In the tradition of Sanderson (1996) and Phillips (2006), Alison Kay argues that women were active in business during the nineteenth century. Women were not confined to a separate sphere, and coverture did not prevent them from operating as entrepreneurs. Strikingly, Kay concludes that the story of women in business is neither a story of a lost golden age, nor one of emancipation, but a story of continuity across history. Whatever the rhetoric, women were consistently involved in business throughout the Victorian period.

This book provides the best data yet on businesswomen in London. The main source for the book, and its main contribution, are samples of male and female business owners from records of the Sun Fire insurance company. Fire insurance records were less likely to be skewed by social expectations than other records, and include businesses that trade directories do not. Both male and female business owners had the need and the opportunity to insure their business assets. Since the records are contracts, not advertisements, the information should be accurate; women owners would not have hidden behind male relatives because, as Kay notes, "misrepresentation of proprietorship could be taken as fraud" (p. 50).

The female sample includes all policies covering business assets that were issued to women in 1747, 1761, 1851, and 1861. There are 634 such policies. For comparison Kay also collects a five-percent sample of male policies, using policies taken out in October of the same years. Though she is mainly interested in businesswomen, the male sample is necessary because it allows Kay to compare women to men. The importance of the male sample can be seen by comparison to Lewis's (2009) study of businesswomen in Albany; Lewis measures the median life of a female-owned business, but with no comparable number for male-owned businesses it is hard to say whether women's businesses were short-lived or long-lived.

Kay finds that women operated over the whole range of businesses. While women were more likely than men to operate in the textile and clothing trades, and less likely than men to operate in manufacturing, women owned businesses across the range of industries. Women were certainly not confined to a small number of trades. While dressmaking/millinery was by far the most popular trade for women, only 15 percent of the women taking out policies in 1851 were milliners or dressmakers. Kay does not report the average or median insured value by gender, but does show the distribution of insured value across categories. Women were more likely to have capital below £100 and less likely to have capital over £2,000, but women were present in all categories.

By linking the insurance policies to the 1851 and 1861 censuses, Kay is able to determine the family status of the women in her sample for those years. While the majority of female business owners were widows, there were also significant numbers of single and married women. Many were mothers; one-third of businesswomen were living with children under age 14. Kay shows that it was relatively rare for a businesswoman to live with a sister, or with a son who was a likely heir to the business. Households headed by businesswomen were more likely to employ servants than the average female-headed household.

Chapter Four examines trade cards from the period. Before the tax on newspaper advertisements was abolished in 1853, few businesses advertised in newspapers, and trade cards were a more common form of advertisement. Since trade cards do not survive systematically, they cannot be used quantitatively, but are used to provide a broader picture of businesses owned by women. Fire insurance records reveal a greater number of lodging houses than do the Post Office Directories. The directories tend to include the larger establishments, but not the smaller ones. Lodging houses varied greatly in quality, and in the quality of their clientele. The majority of female lodging-house keepers were spinsters, and most were in their 30s or 40s.

Some businesswomen specialized in renting property. Men and women seem to have been equally likely to invest their assets in property. Kay argues that women who rented property should be seen as active businesswomen rather than as passive rentiers. She points out that men managing properties would be seen as businessmen, and that women should be treated similarly.

Kay has given us valuable information on businesswomen in London, and I hope that she continues her research in order to provide answers to other questions. If she followed these businessmen and businesswomen over time, Kay could determine whether businesses owned by men and women had different failure rates, or different growth rates. These and other questions could potentially be answered by delving further into fire insurance records.

References:

Susan Ingalls Lewis, 2009, Unexceptional Women: Female Proprietors in Mid-Nineteenth-Century Albany, New York, 1830-1885, Columbus: Ohio State University Press.

Nicola Phillips, 2006, Women in Business, 1700-1850, Woodbridge, Suffolk: Boydell Press.

Elizabeth Sanderson, 1996, Women and Work in Eighteenth-Century Edinburgh, New York: St. Martin's Press.

Joyce Burnette is Professor of Economics at Wabash College in Crawfordsville, Indiana. Her book, Gender, Work and Wages in Industrial Revolution Britain, discusses the role of market forces in determining the wages and occupations of women workers during the late eighteenth and early nineteenth centuries. She can be reached at burnettj@wabash.edu.

Subject(s):Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s):Europe
Time Period(s):19th Century

The Dismal Science: How Thinking Like an Economist Undermines Community

Author(s):Marglin, Stephen A.
Reviewer(s):Jones, Eric

Published by EH.NET (March 2008)

Stephen A. Marglin, The Dismal Science: How Thinking Like an Economist Undermines Community. Cambridge, MA: Harvard University Press, 2008. xvi + 359 pp. $35 (cloth), ISBN: 978-0-674-02654-4.

Reviewed for EH.NET by Eric Jones, Melbourne Business School.

This is an exceptionally learned, uncompromisingly contrarian critique of markets and economics by a member of the Department of Economics at Harvard University. Stephen Marglin emphasizes the costs of market transactions and blames economics for supplying the associated frame of reference. The Dismal Science is patently the result of a lifetime of reading and cogitating about conceptual issues related to market exchanges and economists’ approaches to them. Some historical background is given but what is mainly offered is extended commentary on the history of thought and on everyday practice.

The "modern world view," in Marglin's opinion, derives from economics, which ignores the breakdown of community and elevates instead an obsession with productive efficiency. No accusation seems too gross for him to level at the economics profession. Economics distorts everything, he says, particularly human proclivities, although I found it hard to keep clear whether he thinks non-economists are too sensible to think like smart-alec graduate students or have been brainwashed by the economics mind-set seeping into every debate. In a meandering volume crammed with long quotations, he seems to qualify each assertion only to proffer some variant a few pages later.

His charges against economics boil down to the way the subject fosters individual maximization, ignores distributional concerns, and legitimizes “the market” behind a pretense of scientific detachment. Yet economics is a broad church and is always evolving, never more so than at present. Aha, the reader thinks, at least behavioral economics is not so crass as to accept the Homo economicus of Econ 101. Marglin is a step ahead, however, urging that all the behavioral economists are doing is altering one or two assumptions at a time. Theirs may seem a prototypically scientific procedure but Marglin will not agree. His mind is made up, right to lamenting that Adam Smith failed to entitle his book, The Wealth of Workers.

One of Marglin’s favored examples is the Amish, whose mutual dependence resists the market. The Amish exemplify community in his terms, which is to say there is no exit short of exorbitant personal cost. Who are the Amish? For practical purposes they are the community leaders. A world thus dominated is surely as likely to become that of the Lord of the Flies as a circle of benevolence.

While Amish community patterns survive, they are only patterns: Pennsylvania is not the eighteenth-century Rhineland, nor can it be when some Amish run tools off propane gas although they are forbidden electricity, install telephones in their barns although they are not to have them in their houses, and so forth. Thus, while they do so at a long remove, the Amish shadow American society. Theirs is often, so to speak, the world of the Shabbas goy, not the principled realm of community implied.

Marglin’s notion that a world of neighborliness was swept away by impersonal insurance markets does not fully capture reality. He thinks the neighbors would have rallied round to put up another barn for you if yours burned down ? a Seven Brides for Seven Brothers’ model of community help. No doubt there was mutuality in small places. But he does not refer to what actually preceded the development of fire insurance. Previous arrangements were less, not more, personal than the policy one might buy for oneself from an insurance agent. They relied on briefs for alms, instruments not abolished in England until 1828. Parish records are full of sums collected for briefs for distant places.

The system was non-compulsory but also non-local — you subscribed for sufferers whom you would never meet and lived in hopes they would respond to your brief if you suffered in turn. People helped their neighbors but simultaneously belonged to vast networks of Christian support that can only be termed "community" by considerable stretching. This had little to do with the nation-state, which is one of Marglin's innumerable bêtes noires: the instrument was previously the Papal Brief. Briefs did not supplant community; they supplemented it, while being less reliable than formal insurance. Moreover it is misleading to single out England as the home of insurance markets. Continental countries were writing insurance against losses of crops from hailstorms back in the eighteenth century. England, supposed fount of market ideology, did not do so until 1842.

Nor is the impression given of the English enclosure movement more persuasive. Landowners were eager to take land into ring-fenced holdings of their own but this did not preclude their raising productivity. Marglin thinks they were merely engrossing. Enclosure processes were long drawn out, he claims, because until 1688 peasants who resisted were backed by the Crown. Nevertheless we have to explain why progress was slow even afterwards. Copyholders were not instantly stripped of resources — Marglin does not acknowledge that they typically held their farms on three lives. Nor were farmers necessarily averse to leaving the land in order to become shopkeepers in the market towns of a gradually expanding economy.

We must agree that markets entail costs, as Marglin endlessly insists, even if economists neglect the fact. Yet he rationalizes away the corresponding costs of being trapped in small groups that risk being inequitable as well as inefficient. The work of Jonathan Hughes on the colonial economy shows just how onerous non-market regulatory control was: we need not rely on gains in efficiency from the adoption of markets to reject the politicized allocation of resources and roles inseparable from “community.”

Marglin does advance some telling points against the practice of modern economics and, even leaving aside the political animus evident in The Dismal Science, it would take another volume (though not such a long one) to expound and contest its hundreds of propositions. Economists may be left to look after themselves; and while economic historians may wish to contemplate aspects of the critique, I suspect most of them will leave by the door through which they first came in. They may also be under-whelmed by some of the stylized historical facts on which the arguments depend.

Eric Jones is Professorial Fellow, Melbourne Business School, University of Melbourne, and Visiting Professor, University of Exeter. He is the author of The European Miracle, Growth Recurring, and Cultures Merging.

Subject(s):Markets and Institutions
Geographic Area(s):General, International, or Comparative
Time Period(s):General or Comparative

A Culture of Credit: Embedding Trust and Transparency in American Business

Author(s):Olegario, Rowena
Reviewer(s):Dupont, Brandon

Published by EH.NET (July 2007)

Rowena Olegario, A Culture of Credit: Embedding Trust and Transparency in American Business. Cambridge, MA: Harvard University Press, 2006. xiv + 274 pp. $40 (hardcover), ISBN: 0-674-02340-4.

Reviewed for EH.NET by Brandon Dupont, Department of Economics, Western Washington University.

In A Culture of Credit, Rowena Olegario traces the development of credit-reporting firms in the U.S. from their origins with Lewis Tappan’s agency through the twentieth century. Her focus is primarily on how credit-reporting agencies (mostly the Mercantile Agency, which later became the R.G. Dun Company, and J.M. Bradstreet) embedded trust into American business. Olegario defines trust the way that business writers of the time understood the term: willingness to risk capital on borrowers who may not be personally known to the lenders and who may not become repeat customers.

The opening chapter presents a good discussion of mercantile credit over most of the eighteenth and nineteenth centuries in both the U.S. and England with particular emphasis on the emergence of bills of exchange. Trust, Olegario emphasizes, was important in the increasing use of bills of exchange, which was rooted in the belief that they would be paid on time when due. Even relatively safe instruments like bills of exchange depended on the issuer’s reputation, which was the key determinant in obtaining credit. The provision of credit is linked to the consumer mentality of early America: “Even places far removed from established commercial centers were well supplied with consumer goods, whose availability was made possible by credit from British and, increasingly, American suppliers” (p. 25).

The British willingness to use mercantile credit clearly crossed the Atlantic and became embedded in American trade practices; however, the emergence of credit-reporting firms, beginning with the Tappan agency in 1841, was a radical departure from the traditional closed networks that were common in England. In Chapter 2, Olegario argues that the traditional British-style trade protection societies, which performed similar functions, did not emerge in the U.S. because of the less established nature of trade in the U.S., a highly mobile population, high “churn” among businesses, and competition among sellers. Thus, the credit-reporting firm was a uniquely American invention designed to mitigate information asymmetries that are an inherent component of credit transactions. While the book would benefit from more discussion on this asymmetry problem, Olegario does an effective job of describing how Tappan effectively cracked open the old British credit reporting system of trade protection societies (closed groups whose members provided information only to other members on a non-profit basis). The American system was fundamentally different in that it was based on competition among firms for subscribers rather than cooperation within trade protection societies. This competition, particularly between the Mercantile Agency and Bradstreet, would fundamentally shape the evolution of credit-reporting firms over the course of the nineteenth century.

The credit-reporting firms focused on borrowers' financial circumstances and past behavior to determine creditworthiness. This information was most reliably derived from local knowledge provided by correspondents who were mostly attorneys but also included sheriffs, merchants, postmasters and bank cashiers. The local standing of individuals was viewed as the key to their trustworthiness, which explains the Tappan agency's use of local correspondents who knew the community.

Olegario explains some of Tappan's organizational and managerial problems and uses surviving circulars to illustrate his efforts to establish legitimacy for a brand-new industry. Most of these arguments were essentially based on the increased efficiency that creditworthiness information made possible. These new agencies, not surprisingly, created deep suspicion among some who viewed the scrutiny as humiliating. Olegario writes that, "Attaining legitimacy was contingent on increased familiarity: as credit reports became more widely used, they became perceived as essential to the responsible management of risk" (p. 59). Eventually, the credit-reporting agencies won the battle for public perception, and increasing numbers of firms were willing to pay for the information they provided.

Robert G. Dun joined the Mercantile Agency in 1846 and became a Milwaukee reporter four years later. He would push aggressively into the South and West and also sought to diversify the client base beyond wholesalers to include banks, fire insurance companies, manufacturers and commission houses.

In Chapter 3, Olegario describes risk assessment methods that were based on a specific set of character traits, primarily because payment histories were nearly impossible to obtain. In the nineteenth century, there was no generally held belief that creditors should share information about client payment histories with each other, mostly out of fear that sharing positive information about clients would lead competitors to steal those clients. Credit-reporting firms saw character manifested in traits of honesty, punctuality, thrift, vices (specifically drinking and gambling), energy, experience, marital status, age and focus. Olegario also points to "other considerations, including past behavior and experience" as important, although it is not clear specifically what she means here. Evaluations of creditworthiness would also typically include examinations of public records on mortgages and taxes paid on real estate, supplemented by visits to the establishments themselves. Credit ratings were therefore largely based on information that had some bearing on the probability of repayment, at least as understood at the time.

Chapter 4 focuses on Jewish merchants to shed some light on the insistence of credit reporters on transparency rather than the traditionally closed networks common to Jewish merchants despite the advantages of the closed networks (particularly a greater capacity to maintain stability during economic downturns). Olegario also uses the Jewish example to emphasize the role that ethnic communities played in an increasingly integrated national market. Specifically, these ethnic communities often provided the local knowledge that bolstered the confidence of outside creditors.

Chapter 5 focuses on the ways in which mercantile credit changed late in the nineteenth century, primarily in response to the Civil War. The most significant change was the shortening of credit terms, largely in response to the suspension of specie payments between 1862 and 1879, when sellers tried to compensate for fluctuations in currency values by shortening the credit period to fewer than thirty days. There was, of course, pressure to attract and keep customers with generous and flexible credit terms. Despite these relatively minor changes, there were no major changes to the criteria and sources for determining creditworthiness in the late nineteenth century.

Growth continued despite competition (there were forty credit-reporting firms in New York alone between 1873 and 1878) because, as the economy grew larger, scale became a clear advantage, and this was the distinguishing characteristic of the Bradstreet and Dun companies. They pushed aggressively westward, even moving into areas not yet reached by the railroad system, and competition also drove them to include as many businesses as possible, regardless of size. In 1859, the R.G. Dun reference volumes rated 20,268 companies, but this number grew to nearly 1.3 million by 1900.

In addition to growth and competitive pressures, the fifth chapter revisits the continuing efforts of credit-reporting firms to gain broader legitimacy with the public — efforts that were largely successful, according to Olegario's analysis. Articles appeared in newspapers and professional manuals extolling the virtues of the credit-reporting agencies. There was still scattered resistance to the agencies, but the efforts to gain legitimacy in the business community and broader public ultimately proved a success. Some of the objections raised were spurious, but others seemed to be more serious. In particular, objections were made to the huge spreads in the published ratings keys (which would say things such as "worth $250K-$500K"), which rendered them largely meaningless. Some of the reports were old and some correspondents were simply inexperienced in rendering judgments about creditworthiness. The courts played the most prominent role in the struggle for legitimacy, since lawsuits were frequently brought against credit-reporting agencies, but the courts increasingly broadened the definition of privileged communications and determined that, as long as the agencies were reasonably diligent, they could not be held responsible for losses.

There are some areas where Olegario's analysis could benefit from quantification, to the extent that it is possible, since readers will sometimes find themselves asking J.H. Clapham's familiar questions: "How large? How long? How often? How representative?" At times, the book also seems pieced together, leading to some repetition that occasionally impedes the flow of the text. Despite these relatively minor qualms, the book presents a fascinating account of the emergence of credit reporting in the U.S. and does a good job of illuminating some previously unknown factors in this important and understudied area of business history.

Brandon Dupont is an Assistant Professor of Economics at Western Washington University. His most recent publication is “Bank Runs, Information and Contagion in the Panic of 1893,” which is forthcoming in Explorations in Economic History.

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):North America
Time Period(s):20th Century: Pre WWII

Concrete Economics: The Hamiltonian Approach to Economic Growth and Policy

Author(s):Cohen, Stephen S.
DeLong, J. Bradford
Reviewer(s):Salsman, Richard M.

Published by EH.Net (December 2016)

Stephen S. Cohen and J. Bradford DeLong, Concrete Economics: The Hamiltonian Approach to Economic Growth and Policy. Boston: Harvard Business Review Press, 2016. xi + 223 pp. $28 (cloth), ISBN: 978-1-422-18981-8.

Reviewed for EH.Net by Richard M. Salsman, Program in Philosophy, Politics and Economics, Duke University.

When the U.S. has prospered most has it been due mainly to a limited state that ensures equal legal treatment and relatively free markets, or has it been due to an active, intervening state that regiments activity and protects or subsidizes favored products, firms and sectors, at the expense of others? The latter, say the authors of this brief but rather interesting volume. Take note, any remaining fans of Adam Smith, true believers in “invisible hands,” or diehard devotees of “laissez faire.”

DeLong, a professor of economics at the University of California, Berkeley and Cohen, professor emeritus and co-director of the Berkeley Roundtable in the International Economy (BRIE), deserve credit for reminding us that no economic policy is truly “hands off.” As a discipline, political economy properly recognizes and studies the unavoidable interaction of politics and markets; the latter can’t even function without some basic provision of public goods (e.g., law and order, security of private property, legal sanctity of contract). They deserve kudos also for at least stipulating that prosperity (“the wealth of nations”) is a worthy goal (as it was for Smith), for that’s surely not a key premise of today’s environmentalists, social-justice warriors, or the Pope.

But is more needed, for prosperity, than the provision of basic, rights-protecting public goods? Yes, say these authors — substantially more. In their view the state is an economy’s designer, the one institution which can (must!) identify and clear out “spaces” where entrepreneurs can then confidently operate. They believe the U.S. has suffered economic crises and stagnation since 1980 because economic policy has been too pro-business — i.e., excessively-low tax rates, free trade, deregulation, entrepreneurialism, and promotion of “zero-sum” financial activity at the expense of value-added manufacturing. They tout five prior episodes in U.S. history when “real” prosperity occurred, due (they claim) to their preferred policy mix of high income tax (and tariff) rates, heavy regulation (especially of finance), infrastructure spending, and protectionism: the 1790s and early 1800s (via “Hamiltonian” principles), the post-Civil War Gilded Age (via Lincolnian prescriptions), the progressive era (via Teddy Roosevelt’s policies), the New Deal of the 1930s (via FDR’s New Deal), and the post-WWII decades of infrastructure/aerospace buildout (a by-product of the “military industrial complex” which Eisenhower both encouraged and distrusted).

By intention, the methodology here isn’t very rigorous. No new historical database is cited or analyzed. The authors eschew careful, disciplined treatments of history, empirics, and models. They offer, unapologetically, a selective historical narrative designed mainly to corroborate their theme. They believe policy formation (and analysis) goes awry if ever animated by “ideology,” especially in free-market form. Declaring themselves “non-ideological,” they focus on what they call the “concretes,” instead of abstractions; they endorse only what “works” (“pragmatically”), not what should work (“theoretically”). But can anyone specify what “works” without reference to some criterion? The authors implicitly deny that scientific methods require hypothesizing and testing — that some theory is necessary even to know where to look in a vast empirical record.

DeLong and Cohen’s methodology — more accurately, their anti-methodology posture — is worth mentioning, because in truth they cleverly apply a specific theory in choosing their anecdotes and structuring their narrative, one which economists have variously characterized as “economic nationalism” or “industrial policy.” In this model, popularized in the 1970s and early 1980s by Robert Reich and Lester Thurow, public officials and planners are presumed to be sufficiently wise and prescient to distinguish future economic “winners and losers” and thus able to generate sustainable prosperity by fostering the former and discouraging the latter, while (somehow) also avoiding the corporatist and labor union rent-seeking such policy targeting typically invites. A more recent example of the approach is Mazzucato’s The Entrepreneurial State (2013).

When a theory doesn’t explain economic reality very well, its adherents might elect to eschew theory altogether or instead to cherry-pick the historical record, to make the dubious theory “fit.” Hedging their bets, these authors try both. As for the cherry-picking, they cite nearly every major economic innovation in the U.S. since its founding era and attribute it to the encouragement of some government policy. Thanks mainly to Washington, America has had firearms, railroads, radio, aerospace, autos, trucking, assembly lines, nuclear power, electrification, central banking, paper money, infrastructure, computers, semiconductors, the Internet (yes, they credit Al Gore), and even smart phones. If the U.S. government has ever even remotely touched these things, the authors imply, it pretty much made them possible. At the same time, they blame failed products, eroded industries, and recession-depression decades in U.S. history on overly-free markets.

Even if the claim were true, that these products and sectors were made possible by Washington, it’s worth noting that the authors find that they entail primarily spin-offs from the outlays and projects of the U.S. Department of Defense — a state function even classical liberals can heartily endorse. Are prosperity-fostering spinoffs likely to flow also from the explosion of entitlement-transfer outlays in the half-century since the start of “Great Society” schemes? U.S. federal spending on national defense is now just 12% of all outlays, down from 16% in 2000, 23% in 1980, and 50% in 1960. It’s about as likely as the authors’ more liberal sympathizers being pleased to hear what aspect of U.S. spending most boosts the economy. These Berkeley dons are in the odd position of wishing devoutly for the “military-industrial complex” Ike warned against.

A misleading aspect of the book is the authors' insistence that theirs is "the Hamiltonian approach to economic growth and policy." In truth, Alexander Hamilton, the first U.S. Treasury secretary (1789-1795), wanted (and implemented) a constitutionally-limited federal government, by no means a state engaged in "industrial planning" of the kind DeLong and Cohen want (let alone "social insurance" or "redistribution"). Hamilton rejected British mercantilism, which stunted American manufacturing; and unlike his Jeffersonian-agrarian opponents (and successors), he wanted low and uniform tariffs. Hamilton also defended and implemented a gold-silver based dollar, sustained reductions in the national debt, and a limited-power, privately-owned national (nationwide) bank (not a "central bank," as the authors claim). Also unlike DeLong and Cohen, who devote a whole chapter to denouncing what they claim is a cancer-like "hypertrophy of finance" since 1980, Hamilton saw the financial sector as productive (if left free of government influence — as it surely isn't today), not one that displaces real and healthy economic sinews.

Two periods in U.S. economic history are particularly misrepresented in the book, to fit the authors' theme. The "Gilded Age" — the half-century between the end of the Civil War and the start of World War I — supposedly entailed "vast accumulations of conspicuous wealth" and a "confiscation of the nation's wealth" due to "the crushing power of trusts" (p. 71). In fact, that was a half-century of stupendous invention, entrepreneurship, and wage gains, due to economic freedom, not theft; it was accomplished without a central bank, a federal income tax, centralized industrial planning, or a regulatory state. The other period misrepresented is the 1930s; the authors say FDR's heavily-interventionist New Deal revived the economy, but in fact it mainly prolonged the stagnation.

It’s reasonable to expect a book authored by fans of “industrial policy” to highlight Japan, as did Reich and Thurow in the 1980s. After all, Japan’s Ministry of International Trade and Industry (“MITI”) was heralded as the model for planning agencies globally and the progenitor of its post-war economic “miracle.” Yes, Japan’s industrial production increased nearly 10% per annum from 1950 to 1991; but since then it has shrunk at a compounded rate of 0.4% per annum. Does this quarter-century contraction reflect free market policies enacted after 1991? Hardly. Somehow “things changed,” the authors report, dead-pan. “Japan had become a solidly rich nation.” “Asset values then crashed and stayed crashed,” and “rapid growth became dishearteningly elusive. Why? We do not claim to know” (p. 133).  Herein lies the futility (and dishonesty) of historical cherry-picking: when the facts don’t fit, plead humility and ignorance; otherwise, proclaim boldly and often that every sustained economic success necessarily has flowed from astute state planning.

DeLong and Cohen deserve thanks for issuing a reminder that the humane state should facilitate prosperity and higher living standards, as that’s become a minority (but much needed) view in recent decades. The book’s more refutable parts include the claim that prosperity is achievable by actively countermanding markets, the belief that today’s burgeoning welfare-transfer state (which they condone) can spawn wealth-producing “spinoffs,” and above all, the presumption that their book has the imprimatur of a truly Hamiltonian (pro-capitalist) political economy.

Richard M. Salsman is the author of The Political Economy of Public Debt: Three Centuries of Theory and Evidence (Edward Elgar, 2017). richard.salsman@duke.edu.

Copyright (c) 2016 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (December 2016). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s):Economic Planning and Policy
Geographic Area(s):North America
Time Period(s):18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War

Author(s):Gordon, Robert J.
Reviewer(s):Margo, Robert A.

Published by EH.Net (July 2016)

Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War.  Princeton, NJ: Princeton University Press, 2016. vii + 762 pp. $40 (cloth), ISBN: 978-0-691-14772-7.

Reviewed for EH.Net by Robert A. Margo, Department of Economics, Boston University.

This is the age of blockbuster books in economics. By any metric, Robert Gordon’s new tome qualifies.  It tackles a grand subject, the productivity slowdown, by placing the slowdown in the context of the historical evolution of the American standard of living.  Gordon, who is the Stanley G. Harris Professor in the Social Sciences at Northwestern University, needs no introduction, having long been one of the most famous macroeconomists on planet Earth.

The Rise and Fall of American Growth is divided into three parts.  Part One (chapters 2-9) examines various components of the standard of living, in levels and changes from 1870 to 1940.  Part Two (chapters 10-15) does the same from 1940 to the present, maintaining the same relative order of topics (e.g. transportation appears after housing in both parts).  Part Three (chapters 16-18) provides explanations and offers predictions up through 2040.  There are brief interludes ("Entr'acte") between parts, a Postscript, and a detailed Data Appendix.

Chapter 1 is an overview of the focus, approach, and structure of the book.  Gordon's focus is on the standard of living of American households from 1870 to the present.  The approach is both quantitative — familiar to economists — and qualitative — familiar to historians.  As already noted, the organization is symmetric — Part One considers the pre-World War II period, and Part Two, the post-war.  The fundamental point of the book is that some post-1970 slowdown in growth was inevitable, because so much of what was revolutionary about technology in the first half of the twentieth century was revolutionary only once.

Chapter 2 draws a bleak picture of the standard of living ca. 1870, the dawn of Robert Gordon's modern America.  From the standpoint of a household in 2016, conditions of life in 1870 would appear to be revolting.  The diet was terrible and monotonous to boot; homemade clothing was ill-fitting and crudely made; transportation was dependent principally on the horse, which generated phenomenal amounts of waste; indoor plumbing was all but non-existent; rural Americans lived their lives largely in isolation from the wider world.  In Gordon's view, much of this is missing from conventional real GNP estimates.  Chapter 3 continues the initial story, focusing on changes in food and clothing consumption.  Gordon contends there was not much change in underlying quality, but he argues that, by the 1920s, consumers were paying lower prices for food — having shifted to lower-priced sources (chain stores as opposed to country merchants) — and that most clothing was store-bought rather than homemade.

Chapter 4 studies housing quality.  As with other consumer goods, housing also improved sharply in quality from 1870 to 1940.  Gordon argues that much farm housing was poor in quality, while new urban housing was typically larger and more durably built.  Indoor plumbing, appliances and, ultimately, electrification dramatically enhanced the quality of life while people were indoors.  As elsewhere in the book, reference is made to hedonic estimates of the value of these improvements as revealed in higher rents. Chapter 5 details improvements in transportation between 1870 and 1940. These are grouped into three categories.  The first is improvement in inter-city and inter-regional transportation by rail, which occurred chiefly through improvements in the density of lines and in the speed of transit. The second is intra-city transportation, which improved with the adoption of the electric streetcar.  The third, and arguably most important, is the internal combustion engine and its use in the automobile (and bus).  Gordon especially highlights improvements in the quality of automobiles, noting that the car is not reflected in standard price indices until the middle of the Great Depression.

Chapter 6 details advances in communication from 1870 to 1940.  By current standards, the relevant changes — the telegraph, telephone, the phonograph, and the radio — might not seem like much, but from the point of view of a household in 1870, these technologies enabled Americans to dramatically reduce their isolation.  As Gordon points out, one could phone a neighbor to see if she had a cup of sugar rather than visit in person, or listen to Enrico Caruso's voice on the phonograph if it were not possible to hear him in concert.  The radio brought millions of Americans into the national conversation, whether it was to hear one of Franklin Roosevelt's fireside chats or listen to a baseball game.  Chapter 7 discusses improvements in health and mortality from 1870 to 1940 which, according to Gordon, were unprecedented.  After summarizing these, he turns to causes, chief among which are improved urban sanitation, clean water, and uncontaminated milk.  Gordon also highlights improvements in medical knowledge, particularly the diffusion (and understanding) of the germ theory of disease.  Chapter 8 studies changes in the quality of work from 1870 to 1940.  These changes were wholly for the better, according to Gordon.  Work became less dangerous, more interesting, and more rewarding in terms of real wages.  Most importantly, there was less working per se, as weekly hours fell, freeing up time for leisure activity.  There was a marked reduction in child labor, as children spent more of their time in school, particularly at older ages in high school.  This was also the period leading up, as Claudia Goldin has told us, to the "Quiet Revolution" in the labor force participation of married women, which was to increase substantially after World War II. Credit and insurance, private and social, is the topic of Chapter 9.  The ability to better smooth consumption and to insure against calamity is certainly an improvement in living standards that is not captured by standard GNP price deflators.  Initially, the shift of households from rural to urban areas arguably coincided with a decrease in consumer credit, but by the 1920s credit was on the rise due to several innovations previously documented by economic historians such as Martha Olney.  Households were also better able to obtain insurance of various types (e.g. life, fire, automobile); in particular, loans against life insurance were frequently used as a source for a down payment on a house or car.  Government contributed by expanding social insurance and other programs that helped reduce systemic risks.

Chapter 10 begins the second part of the book, which focuses on the period from 1940 to the present.  As noted, the topic order of Part Two is the same as Part One, so Chapter 10 focuses on food, clothing, and shelter.  Gordon considers the changes in quality in these dimensions of the standard of living to be less monumental than those that occurred before World War II.  For example, frozen food became a ubiquitous option after World War II, but this change is far less important than the pre-1940 improvement in the milk supply.  Quantitatively, perhaps the most important change was a reduction in relative food prices which, predictably, led to an increase in the quantity demanded.  Calories jumped, and so did obesity and many related health problems.  For clothing, the chief difference is in the diversity of styles and, as with food, a sharp reduction in relative price holding quality constant.  In Chapter 11 Gordon notes that automobiles continued to improve in quality after World War II, mostly in terms of amenities and gas mileage; and their usefulness as transportation improved with the building of the interstate highway system.  Gordon is less sanguine about air transportation, arguing that the quality of the travel experience deteriorated after deregulation, a decline that was not offset by reductions in relative prices.  For housing, the major changes were suburbanization and a concomitant increase in square footage.  The early postwar period witnessed some sharp improvements in the quality of basic household appliances and, somewhat later, the widespread diffusion of air conditioning and microwaves.

Chapter 12 focuses on media and entertainment post-1940.  Certain older forms of entertainment gave way to television, the initial benefits of which were followed by steady improvements in the quality of transmission and reception.  Similarly, there were sharp improvements in the various platforms for listening to music, with substantial advances in recording technology and delivery — the 78 gave way to the LP to the CD to music streaming and YouTube.  The technology to deliver entertainment also delivered the news in ever greater quantity (quality is in the eye of the beholder, I suppose).  Americans today are connected almost immediately to every part of the world, a level of communications unthinkable a century ago.  A surprisingly brief Chapter 13 recounts the history of the modern computer.  There is no way to tell this history without emphasizing just how unprecedented the improvements have been, from the very first post-war computers to today's laptops and supercomputers.  Moore's Law, understandably, takes center stage, followed by the Internet and e-commerce.  Gordon has a few negative things to say about the World Wide Web, but the main act — why haven't computers led a revolution in productivity — is saved for later in the book.

Chapter 14 continues the story of health improvements to the present day.  As everyone knows, the U.S. health care system changed markedly after World War II, in terms of delivery of services, organization, and payment schemes.  Great advances were made in cardiovascular care and treatment of infectious disease through the use of antibiotics.  There were also advances in cancer treatment, mostly achieved by the 1970s; the subsequent "war" on cancer has not been as successful.  Most of the benefits were achieved through diffusion of public health and expansion of health knowledge in the general public (e.g. the harmful effects of smoking).  Since 1970 the health care system has shifted to more expensive, capital-intensive treatments, primarily provided in hospitals, that have led to an inexorable growth in medical care's share of GNP, increases that most scholars agree exceed any improvements in health outcomes.  The chapter concludes with a mixed assessment of Obamacare.  Chapter 15, on the labor force, is also rather short for its subject matter.  Gordon recounts the major changes in the structure and composition of work since World War II.  Again, it is a familiar tale — improved working conditions due to the shift towards the service sector and "indoor" jobs; rising labor force participation for married women; rising educational attainment, at least until recently; and the retirement revolution.  Your faithful reviewer gets a shout-out in a brief discussion of the "Great Compression" of the 1940s; my collaborator in that work, Claudia Goldin (and her collaborator, Lawrence Katz), gets much more attention for her scholarly contributions on the subject matter of Chapter 15, understandably so.

Part Three addresses explanations for the time series pattern in the standard of living.  Chapter 16 focuses on the first half of the twentieth century, which experienced a marked jump in total factor productivity (TFP) growth and the standard of living.  Gordon considers several explanations, dismissing two prominent ones — education and urbanization — right out of the gate.   In paeans to Paul David and Alex Field, he argues that the speed-up in TFP growth can be attributed to the eventual diffusion of key inventions of the “Second” industrial revolution, such as electricity; to the New Deal; and, finally, to World War II.  Chapters 17 and 18 tackle the disappointing performance of TFP growth and the standard of living in the last several decades of U.S. economic history.  Despite remarkable accomplishments in science and technology the impact on average living standards has been small, compared with the 1920-70 period.  Rising inequality since 1970, which can be tied in part to skill-biased technical change, has made matters worse, as did the Great Recession.  While Gordon is not all doom and gloom, he definitely falls on the pessimist side of the optimist-pessimist spectrum — his prediction for labor productivity growth over the 2015-40 period is 1.2 percent per year, a full third lower than the observed rate of growth from 1970 to 2014.

I think it is next to impossible to write a blockbuster economics book without it being a mixed bag in some way or other.  Gordon’s is no exception.  On the plus side, the book is well written, and one can only be in awe of Gordon’s mastery of the factual history of the American standard of living.  We all know macroeconomists who dabble in the past.  Gordon is no dabbler.  One can find interesting ideas for future (professional-level) research in every chapter — graduate students in search of topics for second year or job market papers, take note.  Many previous reviewers have chided Gordon for his pessimistic assessment of future prospects.  Of course, no one knows the future, and that includes Gordon.  It is certainly possible that he will be wrong about productivity growth over the next quarter-century — but I for one will be surprised if his prediction is off by, say, an order of magnitude.

I am less sanguine about the mixed qualitative-quantitative method of the book.  I gave up reading the history-of-technology-as-written-by-historians-of-technology a long time ago because it was just one-damn-invention-after-another.  At the end of a typical article recounting the history of improvements in, say, food processing, I was supposed to conclude that no amount of money would get me to travel back in the past before said improvements took place — except I never did reach this conclusion, knowing it to be fundamentally wrong.  Despite references to hedonic estimation, TFP, and the like, in the end Gordon’s book reads very much like conventional history of technology.  More than a half century ago Robert Fogel showed how one could quantify the social savings of a particular invention, thereby truly advancing scholarly knowledge of the treatment effects. Yet Railroads and American Economic Growth is not even cited in Gordon’s bibliography, let alone discussed in the text.  If one’s focus is the aggregate, I suppose a Fogelian approach is impossible — there are too many inventions, and (presumably) an adding-up problem to boot.  What exactly, though, do we learn from going back and forth between quantitative TFP and qualitative one-damn-invention-after-another? I’m not sure.  There’s the rub, or rather, the tradeoff.

Criticisms aside, if you are into economics blockbusters, The Rise and Fall of American Growth belongs on your bookshelf, next to Piketty and the like.  Just be sure it is a heavy-duty bookshelf.

Robert A. Margo’s Economic History Association presidential address, “Obama, Katrina, and the Persistence of Racial Inequality,” was published in the Journal of Economic History in June 2016.

Copyright (c) 2016 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2016). All EH.Net reviews are archived at http://eh.net/book-reviews/

Subject(s):Economic Development, Growth, and Aggregate Productivity
History of Technology, including Technological Change
Household, Family and Consumer History
Living Standards, Anthropometric History, Economic Anthropology
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years, initial capital outlays of 24 million British pounds, and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy’s Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time in connection with accidents and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California; it was known that a high percentage of all days were sunny, so that outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor will lead to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and the now smaller future harvest will have to be consumed more slowly over the period until the next season’s crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop’s inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in other periods. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.
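
The price mechanism just described can be illustrated with a small numerical sketch. Everything in the following Python fragment is hypothetical (the 900-unit stock, the linear demand curve, and both prices are invented for illustration); it is offered only to show why a price that rises on bad harvest news produces the smoother consumption path discussed above.

# Purely illustrative numbers; nothing here is drawn from the article.
# The sketch contrasts how a fixed post-harvest stock is drawn down over a
# crop year when the price does, or does not, rise on news of a poor harvest,
# using a simple linear demand curve q = intercept + slope * price.

def consumption_path(stock, price, months=12, intercept=140.0, slope=-40.0):
    """Monthly consumption from a fixed stock under linear demand."""
    path = []
    for _ in range(months):
        q = max(intercept + slope * price, 0.0)
        q = min(q, stock)            # cannot consume more than remains
        stock -= q
        path.append(q)
    return path

# Suppose a poor harvest leaves only 900 (hypothetical) units to last until
# the next crop comes in.
stock = 900.0
flat_price = consumption_path(stock, price=1.00)    # price ignores the bad news
higher_price = consumption_path(stock, price=1.65)  # price rises on the bad news

print("Flat price:  ", [round(q) for q in flat_price])
print("Higher price:", [round(q) for q in higher_price])
# At the flat price, consumption runs at 100 units a month and the stock is
# exhausted after month 9, leaving the last months with nothing; at the higher
# price, consumption slows to 74 units a month and the stock lasts the whole
# year, which is the smoother path described above.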

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining if private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good such that private organizations would create an insufficiently large amount of socially beneficial information? There are also two parts to this latter public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that one might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected its own benefits to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe’s weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled “Disaster on the Lakes.” The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868, and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships that were totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham’s list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine’s office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer’s eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the Congressional Joint Resolution which “authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms.” Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations at twenty-four stations on November 1, 1870, at 7:35 a.m. Washington time. The storm-warning system began formal operation October 23, 1871 with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress expanded the Army Signal Service’s explicit forecast responsibilities via an appropriations act to most of the United States “for such stations, reports, and signals as may be found necessary for the benefit of agriculture and commercial interests.” In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
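
The incompleteness of a simple verification percentage can be made concrete with a toy calculation. In the Python sketch below, the counts for the “cautious” policy and the number of storm episodes are invented, and the helper function is hypothetical rather than anything the Signal Service used; only the 354 signals and the roughly 70 percent verification rate echo the figures above. The sketch also assumes, for simplicity, that each verified warning corresponds to a distinct storm.

# Invented numbers for illustration only.  The point is that the share of
# warnings verified (a "hit rate") can be pushed up simply by warning less
# often, even while the share of actual storms that received a warning falls.

def scorecard(warnings_issued, warnings_verified, storms_occurred):
    """Return (share of warnings verified, share of storms that were warned)."""
    return (warnings_verified / warnings_issued,
            warnings_verified / storms_occurred)

storms = 300   # hypothetical count of qualifying storm episodes in a season

# A liberal policy: warn whenever a storm looks likely (roughly the 1871-72
# figures quoted above: 354 signals, about 70 percent verified).
liberal = scorecard(354, 248, storms)
# A cautious policy: warn only on near-certain storms (invented counts).
cautious = scorecard(120, 108, storms)

print(f"Liberal:  {liberal[0]:.0%} of warnings verified, {liberal[1]:.0%} of storms warned")
print(f"Cautious: {cautious[0]:.0%} of warnings verified, {cautious[1]:.0%} of storms warned")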

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service Meteorological Network from 1870 to 1890.) Additional display stations only provided storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year   Budget (Real     Stations of the   Stations of the   Repair     Display    Special River   Special Cotton-
       1880 Dollars)    Second Order      Third Order       Stations   Stations   Stations        Region Stations
1870       32,487             25                 -              -          -            -                 -
1871      112,456             54                 -              -          -            -                 -
1872      220,269             65                 -              -          -            -                 -
1873      549,634             80                 9              -          -            -                 -
1874      649,431             92                20              -          -            -                 -
1875      749,228             98                20              -          -            -                 -
1876      849,025            106                38             23          -            -                 -
1877      849,025            116                29             10          9           23                 -
1878      978,085            136                36             12         11           23                 -
1879    1,043,604            158                30             17         46           30                 -
1880    1,109,123            173                39             49         50           29                 -
1881    1,080,254            171                47             44         61           29                87
1882      937,077            169                45              3         74           30               127
1883      950,737            143                42             27          7           30               124
1884    1,014,898            138                68              7         63           40               138
1885    1,085,479            152                58              8         64           66               137
1886    1,150,673            146                33             11         66           69               135
1887    1,080,291            145                31             13         63           70               133
1888    1,063,639            149                30             24         68           78               116
1889    1,022,031            148                32             23         66           72               114
1890      994,629            144                34             15         73           72               114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and “Provision and Value of Weather Information Services,” Craft (1995), p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day; most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to the reduction of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid 1870s and between $1 million and $4.5 million per year by the early 1880s.

Figure 2 Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874, p. 2; December 18, 1875; December 27, 1876, p. 6; December 17, 1878; December 29, 1879, p. 6; February 3, 1881, p. 12; December 28, 1883, p. 3; December 5, 1885, p. 4); Marine Record (December 27, 1883, p. 5; December 25, 1884, pp. 4-5; December 24, 1885, pp. 4-5; December 30, 1886, p. 6; December 15, 1887, pp. 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871-1890.

Figure 2 Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.
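
The flavor of the regression approach just described can be conveyed with a brief sketch. All of the data in the following Python fragment are fabricated for illustration (the station counts, tonnage, storm-severity index, and the loss equation itself), so it is not Craft’s specification or data; it simply shows how the effect of an additional storm-warning location on the logarithm of losses can be estimated while other influences are held constant.

# A minimal sketch of a multiple regression of the kind described above,
# using fabricated data.  None of these numbers come from the article or
# from Craft (1998).
import numpy as np

rng = np.random.default_rng(0)
n = 20
trend = np.arange(n)                                          # years since the start
warning_stations = 20 + 3 * trend + rng.integers(-8, 9, n)    # hypothetical counts
tonnage = 300 + 15 * trend + rng.normal(0, 20, n)             # hypothetical thousands of tons
storm_index = rng.normal(0, 1, n)                             # hypothetical severity measure

# Fabricate losses so that each additional warning station lowers losses by
# about one percent, the size of effect reported in the text.
log_losses = (14.0 - 0.01 * warning_stations + 0.002 * tonnage
              + 0.10 * storm_index - 0.01 * trend
              + rng.normal(0, 0.02, n))

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), warning_stations, tonnage, storm_index, trend])
coef, *_ = np.linalg.lstsq(X, log_losses, rcond=None)
print("Estimated coefficient on warning stations:", round(coef[1], 4))
# The estimate comes back close to the -0.01 built into the fabricated data,
# i.e. roughly a one percent reduction in losses per additional station.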

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, such reductions in shipping prices due to savings in losses caused by storms can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices. It was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premia data for shipments on the Great Lakes are limited and difficult to interpret due to the waxing and waning of the insurance cartel’s cohesion, such data are also supportive of the overall interpretation.

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable minimum bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States Federal Government were a socially beneficial investment.
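
What a rate of return on the creation of weather information means can be sketched with an internal-rate-of-return calculation. The annual net flows below are hypothetical placeholders rather than Craft’s figures, and the article does not spell out the exact computation behind the 64 percent estimate; the sketch only illustrates the mechanics, with the early years of net outlays acting as the investment that later net savings repay.

# Hypothetical cash flows (thousands of 1880 dollars), NOT Craft's figures.

def npv(rate, flows):
    """Net present value of annual net flows discounted at `rate`."""
    return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=0.0, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical net benefits (savings minus budget), 1870 onward: two years of
# net outlays while the network was built, then growing annual net savings.
net_flows = [-150, -400, 100, 300, 500, 700, 900, 1000, 1200, 1500,
             1800, 2000, 2000, 1500, 1800, 2000, 2200, 2200, 2400]

print(f"Illustrative internal rate of return: {irr(net_flows):.0%}")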

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of Federal agencies whose activities included scientific research. The Allison Commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses, including courts-martial for soldiers, for deficient job performance. Problems of the military organization, however, included the limited ability to advance in rank while working for the Signal Service and tension between civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper air weather conditions grew rapidly after the turn of the century on account of two related events: the development of aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change in the Weather Bureau’s organizational direction since the transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau’s budget being directed toward aerology research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. The Weather Bureau was transferred in 1940 to the Department of Commerce, where other support for aviation already originated. This transition mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. Subsequently renamed the National Weather Service, it has remained there since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider the Allied plans to strike German oil refineries in Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could only reach the refineries in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March or August were possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy (planned for June 5, 1944, and postponed until June 6). The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm with much loss of life in October of 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861 the first warnings were issued. By August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on weather. (This is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity, meteorology, and business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Forecasts and warnings were discontinued in 1866; warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Given that storms generally moved from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Meteorological Bureau was not founded until 1878, and then with a budget of only $12,000.

After the initiation of storm warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques after World War I, which incorporated cold and warm fronts. In the difficult days in Norway during the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than the cost of production. There was discussion in the early winter of 1870 between the scientist Increase Lapham and a businessman in Chicago of the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999). But earlier attempts in the United States had failed to sustain any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743: Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777: Thomas Jefferson at Monticello, Virginia, and James Madison at Williamsburg, Virginia, collect a series of contemporaneous weather observations.

1814: Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817: Josiah Meigs, Commissioner of the General Land Office, requests officials at their land offices to record meteorological observations.

1846-1848: Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts compiled from ships’ logs showing efficient sailing routes.

1847: Barometer used to issue storm warnings in Barbados.

1848: J. Jones of New York advertises meteorological reports costing between twelve and one-half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848: Publication in the British Daily News of the first telegraphic daily weather report.

1849: The Smithsonian Institution begins a nearly three-decade-long project of collecting meteorological data with the goal of understanding storms.

1849: Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855: Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858: The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860: Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861: Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863: Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868: Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869: The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869: Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870: Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm warnings are offered on November 8. Forecasts begin the following February 19.

1872: Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880: Frost warnings offered for Louisiana sugar producers.

1881-1884: Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881: Special cotton-region weather reporting network established.

1891: Weather Bureau transferred to the Department of Agriculture.

1902: Daily weather forecasts sent by radio to Cunard Line steamships.

1905: First wireless weather report from a ship at sea.

1918: Norway expands its meteorological network and organization, leading to the development of new forecasting theories centered on the three-dimensional interaction of cold and warm fronts.

1919: American Meteorological Society founded.

1926: Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934: First private sector meteorologist hired by a utility company.

1940: The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946: First private weather forecast companies begin service.

1960: The first meteorological satellite, Tiros I, enters orbit successfully.

1976: The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no. 5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

                      1750                 1790                              1810                              1860
State              White      Black     White      Free       Slave      White      Free       Slave       White        Free       Slave
                                                   Nonwhite                         Nonwhite                             Nonwhite
Connecticut        108,270    3,010     232,236    2,771      2,648      255,179    6,453      310         451,504      8,643      -
Delaware           27,208     1,496     46,310     3,899      8,887      55,361     13,136     4,177       90,589       19,829     1,798
Georgia            4,200      1,000     52,886     398        29,264     145,414    1,801      105,218     591,550      3,538      462,198
Maryland           97,623     43,450    208,649    8,043      103,036    235,117    33,927     111,502     515,918      83,942     87,189
Massachusetts      183,925    4,075     373,187    5,369      -          465,303    6,737      -           1,221,432    9,634      -
New Hampshire      26,955     550       141,112    630        157        182,690    970        -           325,579      494        -
New Jersey         66,039     5,354     169,954    2,762      11,423     226,868    7,843      10,851      646,699      25,318     -
New York           65,682     11,014    314,366    4,682      21,193     918,699    25,333     15,017      3,831,590    49,145     -
North Carolina     53,184     19,800    289,181    5,041      100,783    376,410    10,266     168,824     629,942      31,621     331,059
Pennsylvania       116,794    2,872     317,479    6,531      3,707      786,804    22,492     795         2,849,259    56,956     -
Rhode Island       29,879     3,347     64,670     3,484      958        73,214     3,609      108         170,649      3,971      -
South Carolina     25,000     39,000    140,178    1,801      107,094    214,196    4,554      196,365     291,300      10,002     402,406
Virginia           129,581    101,452   442,117    12,866     292,627    551,534    30,570     392,518     1,047,299    58,154     490,865
United States      934,340    236,420   2,792,325  58,277     681,777    4,486,789  167,691    1,005,685   12,663,310   361,247    1,775,515

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

                    1750           1790           1810           1860
State               Black/total    Slave/total    Slave/total    Slave/total
                    population     population     population     population
Alabama                -              -              -            45.12
Arkansas               -              -              -            25.52
Delaware              5.21          15.04           5.75           1.60
Florida                -              -              -            43.97
Georgia              19.23          35.45          41.68          43.72
Kentucky               -            16.87          19.82          19.51
Louisiana              -              -              -            46.85
Maryland             30.80          32.23          29.30          12.69
Mississippi            -              -              -            55.18
Missouri               -              -              -             9.72
North Carolina       27.13          25.51          30.39          33.35
South Carolina       60.94          43.00          47.30          57.18
Tennessee              -              -            17.02          24.84
Texas                  -              -              -            30.22
Virginia             43.91          39.14          40.27          30.75
Overall              37.97          33.95          33.25          32.27

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State Total Held 1 Held 2 Held 3 Held 4 Held 5 Held 1-5 Held 100- Held 500+
slaveholders slave slaves Slaves slaves slaves slaves 499 slaves slaves
AL 33,730 5,607 3,663 2,805 2,329 1,986 16,390 344 -
AR 11,481 2,339 1,503 1,070 894 730 6,536 65 1
DE 587 237 114 74 51 34 510 - -
FL 5,152 863 568 437 365 285 2,518 47 -
GA 41,084 6,713 4,335 3,482 2,984 2,543 20,057 211 8
KY 38,645 9,306 5,430 4,009 3,281 2,694 24,720 7 -
LA 22,033 4,092 2,573 2,034 1,536 1,310 11,545 543 4
MD 13,783 4,119 1,952 1,279 1,023 815 9,188 16 -
MS 30,943 4,856 3,201 2,503 2,129 1,809 14,498 315 1
MO 24,320 6,893 3,754 2,773 2,243 1,686 17,349 4 -
NC 34,658 6,440 4,017 3,068 2,546 2,245 18,316 133 -
SC 26,701 3,763 2,533 1,990 1,731 1,541 11,558 441 8
TN 36,844 7,820 4,738 3,609 3,012 2,536 21,715 47 -
TX 21,878 4,593 2,874 2,093 1,782 1,439 12,781 54 -
VA 52,128 11,085 5,989 4,474 3,807 3,233 28,588 114 -
TOTAL 393,967 78,726 47,244 35,700 29,713 24,886 216,269 2,341 22

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? The answer is an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves traveled aboard common carriers as legitimate passengers and as fugitives, and they suffered death and injury in accidents along the way. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860, with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. He estimated that, from 1820 to 1860, an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.
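Studies of slave prices, such as Kotlikoff's (1979) analysis of New Orleans sale records, typically separate these individual and market influences with hedonic price regressions. The sketch below illustrates the general idea only; the sample, variable names, and coefficient values are synthetic and hypothetical, not drawn from any historical source.

```python
import numpy as np

# Minimal hedonic-pricing illustration with synthetic (hypothetical) data.
# Log price is modeled as a function of observable characteristics, in the
# spirit -- not the detail -- of regression studies such as Kotlikoff (1979).
rng = np.random.default_rng(0)
n = 500

age = rng.uniform(10, 60, n)            # age in years
male = rng.integers(0, 2, n)            # 1 = male
skilled = rng.binomial(1, 0.15, n)      # 1 = skilled worker
# Hypothetical data-generating process: quadratic age profile plus premiums.
log_price = (5.5 + 0.08 * age - 0.0016 * age**2
             + 0.10 * male + 0.45 * skilled
             + rng.normal(0, 0.2, n))

# Ordinary least squares on a design matrix: constant, age, age^2, male, skilled.
X = np.column_stack([np.ones(n), age, age**2, male, skilled])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

for name, b in zip(["constant", "age", "age^2", "male", "skilled"], coef):
    print(f"{name:>8}: {b: .4f}")

# The age coefficients imply a price peak near -b_age / (2 * b_age2),
# analogous to the early-to-mid-twenties peaks reported in the literature.
peak_age = -coef[1] / (2 * coef[2])
print(f"implied peak age: {peak_age:.1f}")
```

In actual studies, the right-hand side also includes market-level controls such as the year of sale, the region, and the price of cotton.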

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure omitted. Source: Fogel and Engerman (1974).]

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitory regional price differences, which in turn prompted large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, an estimated 84 percent of slaves taken to the lower South traveled this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1,381 in 1861 and for $1,116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.
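To see why inflation implies a steeper real decline, suppose — purely for illustration, since no wartime price index is given here — that the Southern price level doubled between 1861 and 1862. The real 1862 price, expressed in 1861 dollars, would then be

\[
P^{\text{real}}_{1862} \;=\; \frac{P^{\text{nominal}}_{1862}}{\pi_{1861\rightarrow 1862}} \;=\; \frac{\$1{,}116}{2} \;\approx\; \$558,
\]

a real decline of roughly 60 percent from the 1861 price of $1,381, compared with a nominal decline of only about 19 percent.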


[Figure omitted. Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known episode suggests that free workers of the day thought urban slavery worked all too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
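In stylized form — a sketch of the general approach rather than Conrad and Meyer's or Fogel and Engerman's exact specifications — the two calculations can be written as

\[
P_0 \;=\; \sum_{t=1}^{T} \frac{R_t - C_t}{(1+r)^t},
\qquad\qquad
\text{TFP} \;=\; \frac{Q}{L^{\alpha} K^{\beta} N^{\gamma}},
\]

where \(P_0\) is the purchase price of a slave, \(R_t - C_t\) the expected net earnings (revenue less maintenance and supervisory costs) in year \(t\), \(T\) the expected remaining working life, and \(r\) the implied rate of return; in the productivity index, \(Q\) is output and \(L\), \(K\), and \(N\) are labor, capital, and land inputs weighted by illustrative factor shares \(\alpha\), \(\beta\), and \(\gamma\).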

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm otherwise identical to a free farm (in terms of the amount of land, livestock, machinery, and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence that slavery provided pecuniary benefits is this: slave labor replaced other forms of labor when it became relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
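These competing claims can be expressed with a simple accounting identity in the spirit of Vedder's (1975) expropriation rate; the notation here is illustrative rather than any author's exact formula:

\[
e \;=\; \frac{Q - C}{Q},
\]

where \(Q\) is the value of output attributable to a slave, \(C\) the value of the food, clothing, shelter, and other consumption the slave received, and \(e\) the exploitation (expropriation) rate. Fogel and Engerman's figures imply \(e \approx 0.1\); the other estimates cited above put \(e\) at roughly 0.5 or higher.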

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan University Press, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —-
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 —- —-
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 —- —-
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930.

With the exception of the figures for 2000, participation rates are based on the gainful employment standard

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was about seventeen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 —- —-
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 —- —-
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Note: North includes the New England, Middle Atlantic, and North Central regions

South includes the South Atlantic and South Central regions

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
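A compact way to summarize this reasoning — a stylized sketch rather than a formal model taken from the studies discussed here — is the pair of conditions

\[
W_a \;\ge\; \bar{W}
\qquad\text{and}\qquad
B^{R}_a \;>\; B^{L}_a ,
\]

where \(W_a\) is the stock of resources available at age \(a\) (savings, pension wealth, and expected family support), \(\bar{W}\) the minimum needed to finance consumption over the remaining expected lifespan, \(B^{R}_a\) the net benefits of retirement (chiefly leisure), and \(B^{L}_a\) the net benefits of continued work (labor income less the costs of working). An illness, a disability, or advancing age lowers \(B^{L}_a\), while higher pension income raises both \(W_a\) and \(B^{R}_a\), so each change makes retirement more likely.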

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent for the elderly born in the nineteenth century than for men born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth has led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3,430 (in 1996 dollars), which is comparable to the levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living commensurate with eight times the income of Americans living a century ago. More income has made it possible to save for an extended retirement.

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. As a result of this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans that were funded under a ‘pay as you go’ system — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. Thanks to strong political allies, the struggling railroad pensions were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions, and to provide tax incentives (like those for employer-provided pensions) for those without access to private pensions to save for retirement. Since 1962, the self-employed have been able to establish ‘Keogh plans’ — tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to follow funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA provided those not covered by a private pension with the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from ‘defined benefit’ to ‘defined contribution’ plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and greater employee choice over how the funds are invested.
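
In stylized terms (a schematic contrast for illustration, not the formula of any particular plan), a defined benefit plan promises

\[
B_{DB} = k \times Y \times \bar{S},
\]

where \(k\) is an accrual rate, \(Y\) is years of service, and \(\bar{S}\) is an average salary, so the employer bears the investment risk; under a defined contribution plan the balance available at retirement is

\[
W_{T} = \sum_{t=1}^{T} c_{t} \prod_{s=t+1}^{T} (1 + r_{s}),
\]

where \(c_{t}\) is the contribution in year \(t\) and \(r_{s}\) the realized return in year \(s\), so the benefit, and the investment risk, rest with the employee.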

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.

References

Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, Jan 18, 2002. http://www.eh.net/encyclopedia/contents/maloney.african.american.php

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997. http://www.ipums.umn.edu

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/economic-history-of-retirement-in-the-united-states/

Public Sector Pensions in the United States

Lee A. Craig, North Carolina State University

Introduction

Although employer-provided retirement plans are a relatively recent phenomenon in the private sector, dating from the late nineteenth century, public sector plans go back much further in history. From the Roman Empire to the rise of the early-modern nation state, rulers and legislatures have provided pensions for the workers who administered public programs. Military pensions, in particular, have a long history, and they have often been used as a key element to attract, retain, and motivate military personnel. In the United States, pensions for disabled and retired military personnel predate the signing of the U.S. Constitution.

Like military pensions, pensions for loyal civil servants date back centuries. Prior to the nineteenth century, however, these pensions were typically handed out on a case-by-case basis; except for the military, there were few if any retirement plans or systems with well-defined rules for qualification, contributions, funding, and so forth. Most European countries maintained some type of formal pension system for their public sector workers by the late nineteenth century. Although a few U.S. municipalities offered plans prior to 1900, most public sector workers were not offered pensions until the first decades of the twentieth century. Teachers, firefighters, and police officers were typically the first non-military workers to receive a retirement plan as part of their compensation.

By 1930, pension coverage in the public sector was relatively widespread in the United States, with all federal workers being covered by a pension and an increasing share of state and local employees included in pension plans. In contrast, pension coverage in the private sector during the first three decades of the twentieth century remained very low, perhaps as low as 10 to 12 percent of the labor force (Clark, Craig, and Wilson 2003). Even today, pension coverage is much higher in the public sector than it is in the private sector. Over 90 percent of public sector workers are covered by an employer-provided pension plan, whereas only about half of the private sector work force is covered (Employee Benefit Research Institute 1997).

It should be noted that although today the term “pension” generally refers to cash payments received after the termination of one’s working years, typically in the form of an annuity, historically, a much wider range of retiree benefits, survivor’s annuities, and disability benefits were also referred to as pensions. In the United States, for example, the initial army and navy pension systems were primarily disability plans. However, disability was often liberally defined and included superannuation or the inability to perform regular duties due to infirmities associated with old age. In fact, every disability plan created for U.S. war veterans eventually became an old-age pension plan, and the history of these plans often reflected broader economic and social trends.

Early Military Pensions

Ancient Rome

Military pensions date from antiquity. Almost from its founding, the Roman Republic offered pensions to its successful military personnel; however, these payments, which often took the form of land or special appropriations, were generally ad hoc and typically based on the machinations of influential political cliques. As a result, on more than one occasion, a pension served as little more than a bribe to incite soldiers to serve as the personal troops of the politicians who secured the pension. No small amount of the turmoil accompanying the Republic’s decline can be attributed to this flaw in Roman public finance.

After establishing the Empire, Augustus, who knew a thing or two about the politics and economics of military issues, created a formal pension plan (13 BC): Veteran legionnaires were to receive a pension upon the completion of sixteen years in a legion and four years in the military reserves. This was a true retirement plan designed to reward and mollify veterans returning from Rome’s frontier campaigns. The original Augustan pension suffered from the fact that it was paid from general revenues (and Augustus’ own generous contributions), and in 5 AD (6 AD according to some sources), Augustus established a special fund (aerarium militare) from which retiring soldiers were paid. Although the length of service was also increased from sixteen years on active duty to twenty (and five years in the reserves), the pension system was explicitly funded through a five percent tax on inheritances and a one percent tax on all transactions conducted through auctions — essentially a sales tax. Retiring legionnaires were to receive 3,000 denarii; centurions received considerably larger stipends (Crook 1996). In the first century AD, a lump-sum payment of 3,000 denarii would have represented a substantial amount of money — at least by working-class standards. A single denarius equaled roughly a day’s wage for a common laborer; so at an eight percent discount rate (Homer and Sylla 1991), the pension would have yielded an annuity of roughly 66 to 75 percent of a laborer’s annual earnings. Curiously, the basic parameters of the Augustan pension system look much like those of modern public sector pension plans. Although the state pension system perished with Rome, the key features — twenty to twenty-five years of service to qualify and a “replacement rate” of 66 to 75 percent — would reemerge more than a thousand years later to become benchmarks for modern public sector plans.
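
A rough check of this arithmetic, treating the 3,000-denarii grant as a perpetuity at the eight percent rate and assuming a working year of roughly 320 to 365 days (the working-year figure is an assumption, not stated in the sources cited here):

\[
3{,}000 \times 0.08 = 240 \text{ denarii per year}, \qquad \frac{240}{365} \approx 0.66, \qquad \frac{240}{320} = 0.75.
\]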

Early-modern Europe

The Roman pension system collapsed, or perhaps withered away is the better term, with Rome itself, and for nearly a thousand years military service throughout Western Civilization was based on personal allegiance within a feudal hierarchy. During the Middle Ages, there were no military pensions strictly comparable to the Roman system, but with the establishment of the nation state came the reemergence of standing armies led by professional soldiers. Like the legions of Imperial Rome, these armies owed their allegiance to a state rather than to a person. The establishment of standardized systems of military pensions followed very shortly thereafter, beginning as early as the sixteenth century in England. During its 1592-93 session, Parliament established “reliefe for Souldiours … [who] adventured their lives and lost their limbs or disabled their bodies” in the service of the Crown (quoted in Clark, Craig, and Wilson 2003, p. 29). Annual pensions were not to exceed ten pounds for “private soldiers,” or twenty pounds for a “lieutenant.” Although one must be cautious in the use of income figures and exchange rates from that era, an annuity of ten pounds would have roughly equaled fifty gold dollars (at subsequent exchange rates), which was the equivalent of per capita income a century or so later, making the pension generous by contemporary standards.

These pensions were nominally disability payments, not retirement pensions, though governments often awarded the latter on a case-by-case basis, and by the eighteenth century all of the other early-modern Great Powers — France, Austria, Spain, and Prussia — maintained some type of military pensions for their officer castes. These public pensions were not universally popular. Indeed, they were often viewed as little more than spoils. Samuel Johnson famously described a public pension as “generally understood to mean pay given to a state-hireling for treason to his country” (quoted in Clark, Craig, and Wilson 2003, 29). By the early nineteenth century, Britain, France, Prussia, and Spain all had formal retirement plans for their military personnel. The benchmark for these plans was the British “half-pay” system in which retired, disabled, or otherwise unemployed officers received roughly fifty percent of their base pay. This was fairly lucrative compared to the annuities received by their continental counterparts.

Military Pensions in the United States

Prior to the American Revolution, Britain’s American colonies provided pensions to disabled men who were injured defending the colonists and their property from the French, the Spanish, and the natives. During the Revolutionary War, the colonies extended this coverage to the members of their militias. Several colonies maintained navies, and they also offered pensions to their naval personnel. Independent of the actions of the colonial legislatures, the Continental Congress established pensions for its army (1776) and naval forces (1775). U.S. military pensions have been continuously provided, in one form or another, ever since.

Revolutionary War Era

Although initially these were all strictly disability plans, in order to keep the troops in the field during the crucial months leading up to the Battle of Yorktown (1781), Congress authorized the payment of a life annuity, equal to one-half base pay, to all officers remaining in the service for the duration of the Revolution. It was not long before Congress and the officers in question realized that the national government’s cash-flow situation and the present value of its future revenues were insufficient to meet this promise. Ultimately, the leaders of the disgruntled officers met at Newburgh, New York, and pressed their demands on Congress, and in the spring of 1783, Congress converted the life annuities to a fixed-term payment equal to full pay for five years. Even these more limited obligations were not fully paid to qualifying veterans, and only the direct intervention of George Washington defused a potential coup (Ferguson 1961; Middlekauff 1982). The Treaty of Paris was signed in September of 1783, and the Continental Army was furloughed shortly thereafter. The officers’ pension claims were subsequently met to a degree by special interest-bearing “commutation certificates” — bonds, essentially. It took another eight years before the Constitution and Alexander Hamilton’s financial reforms placed the new federal government in a position to honor these obligations by the issuance of the new (consolidated) federal debt. However, because of the country’s precarious financial situation, between the Revolution and the consolidation of the debt, many embittered officers sold their “commutation” bonds in the secondary market at a steep discount.

In addition to a “regular” army pension plan, every war from the Revolution through the Indian Wars of the late-nineteenth century saw the creation of a pension plan for the veterans of that particular war. Although every one of those plans was initially a disability plan, they were all eventually converted into old-age pension plans — though this conversion often took a long time. The Revolutionary War plan became a general retirement plan in 1832 — 49 years after the Treaty of Paris ended the war. At that time every surviving veteran of the Revolutionary War received a pension equal to 100 percent of his base pay at the end of the war. Similarly, it was 56 years after the War of 1812 before survivors of that war were given retirement pensions.

Severance Pay

As for a retirement plan for the “regular” army, there was none until the Civil War; however, soldiers who were discharged after 1800 were given three months’ pay as severance. Officers were initially offered the same severance package as enlisted personnel, but in 1802, officers began receiving one month’s pay for each year of service over three years. Hence an officer with twelve years of service earning, say, $40 a month could, theoretically, convert his severance into an annuity, which at a six percent rate of interest would pay $2.40 a month, or less than $30 a year. This was substantially less than a prime farmhand could expect to earn and a pittance compared to the half-pay of, say, a British officer. Prior to the onset of the War of 1812, Congress supplemented these disability and severance packages with a type of retirement pension. Any soldier who enlisted for five years and who was honorably discharged would receive, in addition to his three months’ severance, 160 acres of land from the so-called military reserve. If he was killed in action or died in the service, his widow or heir(s) would receive the same benefit. The reservation price of public land at that time was $2.00 per acre ($1.64 for cash). So, the severance package would have been worth roughly $350, which, annuitized at six percent, would have yielded less than $2.00 a month in perpetuity. This was an ungenerous settlement by almost any standard. Of course, in a nation of small farmers, 160 acres might have represented a good start for a young cash-poor farmhand just out of the army.
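
The officer’s figures above follow if all twelve years of service counted toward the one-month-per-year severance, with the annuity computed as a perpetuity (annual payment equals principal times the interest rate); the land-grant case uses the roughly $350 valuation given in the text:

\[
12 \times \$40 = \$480, \qquad \$480 \times 0.06 = \$28.80 \text{ per year} \approx \$2.40 \text{ per month};
\]

\[
\$350 \times 0.06 = \$21.00 \text{ per year} \approx \$1.75 \text{ per month}.
\]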

The Army Develops a Retirement Plan

The Civil War resulted in a fundamental change in this system. Seeking the power to cull the active list of officers, the Lincoln administration persuaded Congress to pass the first general army retirement law. All officers could apply for retirement after 40 years of service, and a formal retirement board could retire any officer (after 40 years of service) who was deemed incapable of field service. A limit was put on the number of officers who could be retired in this manner. Congress amended the law several times over the next few decades, with the key changes coming in 1870 and 1882. Taken together, these acts established 30 years as the minimum service requirement, 75 percent of base pay as the standard pension, and age 64 as the mandatory retirement age. This was the basic army pension plan until 1920, when Congress established the “up-or-out” policy in which an officer who was not deemed to be on track for promotion was retired. Upon retirement, he was to receive a benefit equal to 2.5 percent multiplied by years of service, not to exceed 75 percent of his base pay at the time of retirement. Although the maximum was reduced to 60 percent in 1924, it was subsequently increased back to 75 percent, and the service requirement was reduced to 20 years. This remains the basic plan for military personnel to this day (Hustead and Hustead 2001).
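
In formula terms, the post-1920 benefit rule described above (with the 75 percent cap) can be written as

\[
\text{pension} = \min(0.025 \times Y,\; 0.75) \times \text{base pay},
\]

where \(Y\) is years of service; 25 years of service, for example, yields 62.5 percent of base pay, and 30 or more years yields the 75 percent maximum.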

Except for the disability plans that were eventually converted to old-age pensions, prior to 1885 the army retirement plan was only available to commissioned officers; however, in that year Congress created the first systematic retirement plan for enlisted personnel in the U.S. Army. Like the officers’ plan, it permitted retirement upon the completion of 30 years of service at 75 percent of base pay. With the subsequent reduction in the minimum service requirement to 20 years, the enlisted plan merged with that for officers.

Naval Pensions

Until after World War I, the army and the navy maintained separate pension plans for their officers. The Continental Navy created a pension plan for its officers and seamen in 1775, even before an army plan was established. In the following year the navy plan was merged with the first army pension plan, and it too was eventually converted to a retirement plan for surviving veterans in 1832. The first disability pension plan for “regular” navy personnel was created in 1799. Officers’ benefits were not to exceed half-pay, while those for seamen and marines were not to exceed $5.00 a month, which was roughly 33 percent of an unskilled seaman’s base pay or 25 percent of that of a hired laborer in the private sector.

Except for the eventual conversion of the war pensions to retirement plans, there was no formal retirement plan for naval personnel until 1855. In that year Congress created a review board composed of five officers from each of the following ranks: captain, commander, and lieutenant. The board was to identify superannuated officers or those generally found to be unfit for service, and at the discretion of the Secretary of the Navy, the officers were to be placed on the reserve list at half-pay subject to the approval of the President. Before the plan had much impact the Civil War intervened, and in 1861 Congress established the essential features of the navy retirement plan, which were to remain in effect throughout the rest of the century. Like the army plan, retirement could occur in one of two ways: Either a retirement board could find the officer incapable of continuing on active duty, or after 40 years of service an officer could apply for retirement. In either case, officers on the retired list remained subject to recall; they were entitled to wear their uniforms; they were subject to the Articles of War and courts-martial; and they received 75 percent of their base pay. However, just as with the army, certain constraints on the length of the retired list limited the effectiveness of the act.

In 1899, largely at the urging of then Assistant Secretary of the Navy Theodore Roosevelt, the navy adopted a rather Byzantine scheme for identifying and forcibly retiring officers deemed unfit to continue on active duty. Retirement (or “plucking”) boards were responsible for identifying those to be retired. Officers could avoid the ignominy of forced retirement by volunteering to retire, and there was a ceiling on the number who could be retired by the boards. In addition, all officers retired under this plan were to receive 75 percent of the sea pay of the next rank above that which they held at the time of retirement. (This last feature was amended in 1912, and officers simply received three-fourths of the pay of the rank in which they retired.) During the expansion of the navy leading up to America’s participation in World War I, the plan was further amended, and in 1915 the president was authorized, with the advice and consent of the Senate, to reinstate any officer involuntarily retired under the 1899 act.

Still, the navy continued to struggle with its superannuated officers. In 1908, Congress finally granted naval officers the right to retire voluntarily at 75 percent of the active-duty pay upon the completion of 30 years of service. In 1916, navy pension rules were again altered, and this time a basic principle – “up or out” (with a pension) – was established, a principle that continues to this day. There were four basic components that differentiated the new navy pension plan from earlier ones. First, promotion to the ranks of rear admiral, captain, and commander was based on the recommendations of a promotion board. Prior to that time, promotions were based solely on seniority. Second, the officers on the active list were to be distributed among the ranks according to percentages that were not to exceed certain limits; thus, there was a limit placed on the number of officers who could be promoted to a certain rank. Third, age limits were placed on officers in each grade. Officers who attained a certain age in a certain rank were retired with their pay equal to 2.5 percent multiplied by the number of years in service, with the maximum not to exceed 75 percent of their final active-duty pay. For example, a commander who reached age 50 and who had not been selected for promotion to captain would be placed on the retired list. If he had served 25 years, then he would receive 62.5 percent of his base pay upon retirement. Finally, the act also imposed the same mandatory retirement provision on naval personnel as the 1882 (amended in 1890) act imposed on army personnel, with age 64 being established as the universal age of retirement in the armed forces of the United States.

These plans applied to naval officers only; however, in 1867 Congress authorized the retirement of seamen and marines who had served 20 or more years and who had become infirm as a result of old age. These veterans would receive one-half their base pay for life. In addition, the act allowed any seaman or marine who had served 10 or more years and subsequently become disabled to apply to the Secretary of the Navy for a “suitable amount of relief” up to one-half base pay from the navy’s pension fund (see below). In 1899, the retirement act of 1885, which covered enlisted army personnel, was extended to enlisted navy personnel, with a few minor differences, which were eliminated in 1907. From that year, all enlisted personnel in both services were entitled to voluntarily retire at 75 percent of their pay and other allowances after 30 years of service, subsequently reduced to 20 years.

Funding U.S. Military Pensions

The history of pensions, particularly public sector pensions, cannot be easily separated from the history of pension finance. The creation of a pension plan coincides with the simultaneous creation of pension liabilities, and the parameters of the plan establish the size and the timing of those liabilities. U.S. Army pensions have always been funded on a “pay-as-you-go” basis from the general revenues of the U.S. Treasury. Thus army pensions have always been simply one more liability of the federal government. Despite the occasional accounting gimmick, the general revenues and obligations of the federal government are highly fungible, and so discussing the actuarial properties of the U.S. Army pension plan is like discussing the actuarial properties of the Department of Agriculture or the salaries of F.B.I. agents. However, until well into the twentieth century, this was not the case with navy pensions. They were long paid from a specific fund established separately from the general accounts of the treasury, and thus, their history is quite different from that of the army’s pensions.

From its inception in 1775, the navy’s pension plan for officers and seamen was financed with monies from the sale of captured prizes — enemy ships and those of other states carrying contraband. This funding mechanism meant that the flow of revenues needed to finance the navy’s pension liabilities was very erratic over time, fluctuating with the fortunes of war and peace. To manage these monies, the Continental Congress (and later the U.S. Congress) established the navy pension fund and allowed the trustees of this fund to invest the monies in a wide range of assets, including private equities. The history of the management of this pension fund illustrates many of the problems that can arise when public pension monies are used to purchase private assets. These include the loss of a substantial proportion of its assets on bad investments in private equities, the treasury’s bailout of the fund for these losses, and investment decisions that were influenced by political pressure. In addition, there is evidence of gross malfeasance on the part of the agents of the fund, including trading on their own accounts, insider trading, and outright fraud.

Excluding a brief interlude just prior to the Civil War, the navy pension fund had a colorful history, lasting nearly one hundred and fifty years. Between its establishment in 1775 and 1842, it went bankrupt no fewer than three times, being bailed out by Congress each time. By 1842, there was little opportunity to continue to replenish the fund with fresh prize monies, and Congress, temporarily as it turned out, converted the navy pensions to a pay-as-you-go system, like army pensions. With the onset of the Civil War, the Union Navy’s blockade of Confederate ports created new prize opportunities; the fund was reestablished, and navy pensions were once again paid from it. The fund subsequently accumulated an enormous balance. Like the antebellum losses of the fund, its postbellum surplus became something of a political football, and after much acrimonious debate, Congress took much of the fund’s balance and turned it over to the treasury. Still, the remnants of the fund persisted into the 1930s (Clark, Craig, and Wilson 2003).

Federal Civil Service Pensions

Like military pensions, pensions for loyal civil servants date back centuries; however, pension plans are of a more recent vintage, generally dating from the nineteenth century in Europe. In the United States, the federal government did not adopt a universal pension plan for civilian employees until 1920. This is not to say that there were no federal pensions before 1920. Pensions were available for some retiring civil servants, but Congress created them on a case-by-case basis. In the year before the federal pension plan went into effect, for example, there were 1,467 special acts of Congress either granting a new pension (912) or increasing the payments on old pensions (555) (Clark, Craig, and Wilson 2003). This process was as inefficient as it was capricious. Ending this system became a key objective of Congressional reforms.

The movement to create public sector pension plans at the turn of the twentieth century reflected the broader growth of the welfare state, particularly in Europe. Many progressives envisioned the nascent European “cradle-to-grave” programs as the precursor of a better society, one with a new social covenant between the state and its people. Old-age pensions would fill the last step before the grave. Although the ultimate goal of this movement, universal old-age pensions, would not be realized until the creation of the social security system during the Great Depression, the initial objective was to have the government supply old-age security to its own workers. To support the movement in the United States, proponents of universal old-age pensions pointed out that by the early twentieth century, thirty-two countries around the world, including most of the European states and many regimes considered to be reactionary on social issues, had some type of old-age pension for their non-military public employees. If the Russians could humanely treat their superannuated civil servants, the argument went, why couldn’t the United States?

Establishing the Civil Service System

In the United States, the key to the creation of a civil service pension plan was the creation of a civil service. Prior to the late nineteenth century, the vast majority of federal employees were patronage employees — that is, they served at the pleasure of an elected or appointed official. With the tremendous growth of the number of such employees in the nineteenth century, the costs of the patronage system eventually outweighed the benefits derived from it. For example, over the century as a whole the number of post offices grew from 906 to 44,848; federal revenues grew from $3 million to over $400 million; and non-military employment went from 1,000 to 100,000. Indeed, the federal labor force nearly doubled in the 1870s alone (Johnson and Libecap 1994). The growth rates of these indicators of the size of the public sector are large even when compared to the dramatic fourteen-fold increase in U.S. population between 1800 and 1900. As a result, in 1883 Congress passed the Pendleton Act, which created the federal civil service, and which was passed largely, though not entirely, along party lines. As the party in power, the Republicans saw the conversion of federal employment from patronage to “merit” as an opportunity to gain the lifetime loyalty of an entire cohort of federal workers. In other words, by converting patronage jobs to civil service jobs, the party in power attempted to create lifetime tenure for its patronage workers. Of course, once in their civil service jobs, protected from the harshest effects of the market and the spoils system, federal workers simply did not want to retire — or put another way, many tended to retire on the job — and thus the conversion from patronage to civil service led to an abundance of superannuated federal workers. Thus began the quest for a federal pension plan.

Passage of the Federal Employees Retirement Act

A bill providing pensions for non-military employees of the federal government was introduced in every session of Congress between 1900 and 1920. Representatives of workers’ groups, the executive branch, and the United States Civil Service Commission, as well as inquiries conducted by congressional committees, all requested or recommended the adoption of retirement plans for civil-service employees. While the political dynamics among these parties were often subtle and complex, the campaigns culminated in the passage of the Federal Employees Retirement Act on May 22, 1920 (Craig 1995). The key features of the original act of 1920 included:

  • All classified civil service employees qualified for a pension after reaching age 70 and rendering at least 15 years of service. Mechanics, letter carriers, and post office clerks were eligible for a pension after reaching age 65, and railway clerks qualified at age 62.
  • The ages at which employees qualified were also mandatory retirement ages. An employee could, however, be retained for two years beyond the mandatory age if his department head and the head of the Civil Service Commission approved.
  • All eligible employees were required to contribute two and one-half percent of their salaries or wages towards the payment of pensions.
  • The pension benefit was determined by the number of years of service. Class A employees were those who had served 30 or more years. Their benefit was 60 percent of their average annual salary during the last ten years of service. The benefits were scaled down through Class F employees (at least 15 years but less than 18 years of service). They received 30 percent of their average annual salary during the last ten years of service. (A stylized version of this benefit formula appears immediately after this list.)
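
In stylized form, the 1920 benefit can be written as

\[
B = r_{c} \times \bar{S}_{10},
\]

where \(\bar{S}_{10}\) is the employee’s average annual salary over the final ten years of service and \(r_{c}\) is the class rate, ranging from \(r_{F} = 0.30\) (at least 15 but less than 18 years of service) up to \(r_{A} = 0.60\) (30 or more years); the act specified rates for the intermediate classes B through E between these bounds, but they are not reproduced here.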

Although subsequently revised, this plan remains one of the two main civil service pension plans in the United States, and it served as something of a model for many subsequent American pension plans. The other, newer federal plan, established in 1983, is a hybrid. That is, it has a traditional defined benefit component, a defined contribution component, and a Social Security component (Hustead and Hustead 2001).

State and Local Pensions

Decades before the states or the federal government provided civilian workers with a pension plan, several large American cities established plans for at least some of their employees. Until the first decades of the twentieth century, however, these plans were generally limited to three groups of employees: police officers, firefighters, and teachers. New York City established the first such plan for its police officers in 1857. Like the early military plans, the New York City police pension plan was a disability plan until a retirement feature was added in 1878 (Mitchell et al. 2001). Only a few other (primarily large) cities joined New York with a plan before 1900. In contrast, municipal workers in Austria-Hungary, Belgium, France, Germany, the Netherlands, Spain, Sweden, and the United Kingdom were covered by retirement plans by 1910 (Squier 1912).

Despite the relatively late start, the subsequent growth of such plans in the United States was rapid. By 1916, 159 cities had a plan for one or more of these groups of workers, and 21 of those cities included other municipal employees in some type of pension coverage (Monthly Labor Review, 1916). In 1917, 85 percent of cities with 100,000 or more residents paid some form of police pension, as did 66 percent of those with populations between 50,000 and 100,000, and 50 percent of cities with populations between 30,000 and 50,000 had some pension liability (James 1921). These figures do not mean that all of these cities had a formal retirement plan. They only indicate that a city had at least $1 of pension liability. This liability could have been from a disability pension, a forced savings plan, or a discretionary pension. Still, by 1928, the Monthly Labor Review (April 1928) could characterize police and fire plans as “practically universal”. At that time, all cities with populations of over 400,000 had a pension plan for either police officers or firefighters or both. Only one did not have a plan for police officers, and only one did not have a plan for firefighters. Several of those cities also had plans for their other municipal employees, and some cities maintained pension plans for their public school teachers separately from state teachers’ plans, which are reviewed below.

Eventually, some states also began to establish pension plans for state employees; however, initially these plans were primarily limited to teachers. Massachusetts established the first retirement pension plan for general state employees in 1911. The plan required workers to pay up to 5 percent of their salaries to a trust fund. Benefits were payable upon retirement. Workers were eligible to retire at age 60, and retirement was mandatory at age 70. At the time of retirement, the state purchased an annuity equal to twice the accumulated value (with interest) of the employee’s contribution. The calculation of the appropriate interest rate was, in many cases, not straightforward. Sometimes market rates or yields from a portfolio of assets were employed; sometimes a rate was simply established by legislation (see below). The Massachusetts plan initially became something of a model for subsequent public-sector pensions, but it was soon replaced by what became the standard public-sector defined benefit plan, much like the federal plan described above, in which the pension annuity was based on years of service and end-of-career earnings. Curiously, the Massachusetts plan resembled in some respects what have been referred to more recently as cash balance plans — hybrid plans that contain elements of both defined benefit and defined contribution plans.
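
A stylized way to write the Massachusetts benefit described above (the annual compounding convention shown is an assumption; the plan simply credited interest to accumulated contributions):

\[
\text{annuity purchase amount} = 2 \times \sum_{t=1}^{T} c_{t} (1+i)^{T-t},
\]

where \(c_{t}\) is the employee’s contribution in year \(t\) (up to 5 percent of salary), \(i\) is the credited interest rate, and \(T\) is the year of retirement.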

Relative to the larger municipalities, the states were, in general, quite slow to adopt pension plans for their employees. As late as 1929, only six states had anything like a civil service pension plan for their (non-teacher) employees (Millis and Montgomery 1938). The record shows that pensions for state and local civil servants are, for the most part, twentieth-century developments. However, after individual municipalities began adopting plans for their teachers in the early twentieth century, the states moved fairly aggressively in the 1910s and 1920s to create or consolidate plans for their other teachers. By the late 1920s, 21 states had formal retirement plans for their public school teachers (Clark, Craig, and Wilson 2003). This summary of state and local pension plans suggests that, of all the political units in the United States, the states themselves were the slowest to create pension plans for their civil service workers. That observation is slightly misleading, however. In 1930, 40 percent of all state and local employees were schoolteachers, and the 21 states that maintained a plan for their teachers included the most populous states at the time. While public sector pensions at the state and local level were far from universal by the 1920s, they did cover a substantial proportion of public sector workers, and that proportion was growing rapidly in the early decades of the twentieth century.

Funding State and Local Pensions

No discussion of the public sector pension plans would be complete without addressing the way in which the various plans were funded. The term “funded pension” is often used to mean a pension plan that had a specific source of revenues dedicated to pay for the plan’s liabilities. Historically, most public sector pension plans required some contribution from the employees covered by the plan, and in a sense, this contribution “funded” the plan; however, the term “funded” is more often taken to mean that the pension plan receives a stream of public funds from a specific source, such as a share of property tax revenues. In addition, the term “actuarially sound” is often used to describe a pension plan in which the present value of tangible assets roughly equals the present value of expected liabilities. Whereas one would logically expect an actuarially sound plan to be a funded plan, indeed a “fully funded” plan, a funded plan need not be actuarially sound, because the flow of funds may simply be too small to cover the plan’s liabilities.
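
In present-value terms, the distinction drawn here can be stated roughly as follows (a stylized restatement, not a formula taken from the plans themselves): a plan is actuarially sound when

\[
A_{0} + \sum_{t \ge 1} \frac{C_{t}}{(1+r)^{t}} \;\approx\; \sum_{t \ge 1} \frac{B_{t}}{(1+r)^{t}},
\]

where \(A_{0}\) is the current value of the plan’s assets, \(C_{t}\) the dedicated contributions expected in year \(t\), \(B_{t}\) the expected benefit payments, and \(r\) the discount rate. A plan is “funded” whenever some dedicated stream \(C_{t}\) exists; nothing guarantees that the stream is large enough for the two sides to balance.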

Many early state and local plans were not funded at all, and fewer still were actuarially sound. Of course, in another sense, public sector pension plans are implicitly funded to the extent that they are backed by the coercive powers of the state. Through their monopoly of taxation, financially solvent and militarily successful states can rely on their tax bases to fund their pension liabilities. Although this is exactly how most of the early state and local plans were ultimately financed, this is not what is typically meant by the term “funded plan”. Still, an important part of the history of state and local pensions revolves around exactly what happened to the funds (mostly employee contributions) that were maintained on behalf of the public sector workers.

Although the maintenance and operation of the state and local pension funds varied greatly during this early period, most plans required a contribution from workers, and this contribution was to be deposited in a so-called “annuity fund.” The assets of the fund were to be “invested” in various ways. In some cases the funds were invested “in accordance with the laws of the state governing the investment of savings bank funds.” In others, the investments of the fund were to be credited “regular interest”, which was defined as “the rate determined by the retirement board, and shall be substantially that which is actually earned by the fund of the retirement association.” This “rate” varied from state to state. In Connecticut, for example, it was literally a realized rate – i.e., a market rate. In Massachusetts, it was initially set at 3 percent by the retirement board, but subsequently it became a realized rate, which turned out to be roughly 4 percent in the late 1910s. In Pennsylvania, the law set the rate at 4 percent. In addition, all three states created a “pension fund”, which contained the state’s contribution to the workers’ retirement annuity. In Connecticut and Massachusetts, this fund simply consisted of “such amounts as shall be appropriated by the general assembly from time to time.” In other words, the state’s share of the pension was on a “pay-as-you-go” basis. In Pennsylvania, however, the state actually contributed 2.8 percent of a teacher’s salary semi-annually to the state pension fund (Clark, Craig, and Wilson 2003).

By the late 1920s some states were basing their contributions to their teachers’ pension fund on actuarial calculations. The first states to adopt such plans were New Jersey, Ohio, and Vermont (Studenski 1920). What this meant in practice was that the state essentially estimated its expected future liability based on a worker’s experience, age, earnings, life expectancy, and so forth, and then deposited that amount into the pension fund. This was originally referred to as a “scientific” pension plan. These were truly funded and actuarially sound defined benefit plans.

As noted, several of the early plans paid an annuity based on the performance of the pension fund. The return on the fund’s portfolio is important because it would ultimately determine the soundness of the funding scheme and, in some cases, the actual annuity the worker would receive. Even the funded, defined benefit plans based the worker’s and the employer’s contributions on expected earnings on the invested funds. How did these early state and local pension funds manage the assets they held? Several state plans restricted their funds to holding only those assets that could be held by state-chartered mutual savings banks. Typically, these banks could hold federal, state, or local government debt. In most states, they could usually hold debt issued by private corporations and occasionally private equities. In the first half of the twentieth century, there were 19 states that chartered mutual savings banks. They were overwhelmingly in the Northeast, Midwest, and Far West — the same regions in which state and local pension plans were most prevalent. However, in most cases the corporate securities were limited to those on a so-called “legal list,” which was supposed to contain only the safest corporate investments. Admission to the legal list was based on a compilation of corporate assets, earnings, dividends, prior default records and so forth. The objective was to provide a list that consisted of the bluest of blue chip corporate securities. In the early decades of the twentieth century, these lists were dominated by railroad and public-utility issues (Hickman 1958). States such as Massachusetts that did not restrict investments to those held by mutual savings banks placed similar limits on their pension funds. Massachusetts limited investments to those that could be made in state-established “sinking funds”. Ohio explicitly limited its pension funds to U.S. debt, Ohio state debt, and the debt of any “county, village, city, or school district of the state of Ohio” (Studenski 1920).

Collectively, the objective of these restrictions was risk minimization — though the economics of that choice is not as simple as it might appear. Cities and states that invested in their own municipal bonds faced an inherent moral hazard. Specifically, public employees might be forced to contribute a proportion of their earnings to their pension funds. If the city then purchased debt at par from itself for the pension fund when that debt might, for various reasons, not circulate at par on the open market, then the city could be tempted to go to the pension fund rather than the market for funds. This process would tend to insulate the city from the discipline of the market, which would in turn tend to cause the city to over-invest in activities financed in this way. Thus, the pension funds, actually the workers themselves, would essentially be forced to subsidize other city operations. In practice, the main beneficiaries would have been the contractors whose activities were funded by the workers’ pension funds. At the time, these would have included largely sewer, water, and road projects. The Chicago police pension fund offers an example of the problem. An audit of the fund in 1912 reported: “It is to be regretted that there are no complete statistical records showing the operation of this fund in the city of Chicago.” As a recent history of pensions noted, “It is hard to imagine that the records were simply misplaced by accident” (Clark, Craig, and Wilson 2003, 213). Thus, like the U.S. Navy pension fund, the agents of these municipal and state funds faced a moral hazard that scholars are still analyzing more than a century later.

References

Clark, Robert L., Lee A. Craig, and Jack W. Wilson. A History of Public Sector Pensions. Philadelphia: University of Pennsylvania Press, 2003.

Craig, Lee A. “The Political Economy of Public-Private Compensation Differentials: The Case of Federal Pensions.” Journal of Economic History 55 (1995): 304-320.

Crook, J. A. “Augustus: Power, Authority, Achievement.” In The Cambridge Ancient History, edited by Alan K. Bowman, Edward Champlin, and Andrew Lintott. Cambridge: Cambridge University Press, 1996.

Employee Benefit Research Institute. EBRI Databook on Employee Benefits. Washington, D. C.: EBRI, 1997.

Ferguson, E. James. Power of the Purse: A History of American Public Finance. Chapel Hill, NC: University of North Carolina Press, 1961.

Hustead, Edwin C., and Toni Hustead. “Federal Civilian and Military Retirement Systems.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead, 66-104. Philadelphia: University of Pennsylvania Press, 2001.

James, Herman G. Local Government in the United States. New York: D. Appleton & Company, 1921.

Johnson, Ronald N., and Gary D. Libecap. The Federal Civil Service System and the Problem of Bureaucracy. Chicago: University of Chicago Press, 1994.

Middlekauff, Robert. The Glorious Cause: The American Revolution, 1763-1789. New York: Oxford University Press, 1982.

Millis, Harry A., and Royal E. Montgomery. Labor’s Risks and Social Insurance. New York: McGraw-Hill, 1938.

Mitchell, Olivia S., David McCarthy, Stanley C. Wisniewski, and Paul Zorn. “Developments in State and Local Pension Plans.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead. Philadelphia: University of Pennsylvania Press, 2001.

Monthly Labor Review, various issues.

Squier, Lee Welling. Old Age Dependency in the United States. New York: Macmillan, 1912.

Studenski, Paul. Teachers’ Pension Systems in the United States: A Critical and Descriptive Study. New York: D. Appleton and Company, 1920.

Citation: Craig, Lee. “Public Sector Pensions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2003. URL http://eh.net/encyclopedia/public-sector-pensions-in-the-united-states/