
Fire Insurance in the United States

Dalit Baranoff

Fire Insurance before 1810

Marine Insurance

The first American insurers modeled themselves after British marine and fire insurers, who were already well-established by the eighteenth century. In eighteenth-century Britain, individual merchants wrote most marine insurance contracts. Shippers and ship owners were able to acquire insurance through an informal exchange centering on London’s coffeehouses. Edward Lloyd’s Coffee-house, the predecessor of Lloyd’s of London, came to dominate the individual underwriting business by the middle of the eighteenth century.

Similar insurance offices where local merchants could underwrite individual voyages began to appear in a number of American port cities in the 1720s. The trade centered on Philadelphia, where at least fifteen different brokerages helped place insurance in the hands of some 150 private underwriters over the course of the eighteenth century. But only a limited amount of coverage was available. American shippers also could acquire insurance through the agents of Lloyd’s and other British insurers, but often had to wait months for payments of losses.

Mutual Fire Insurance

When fire insurance first appeared in Britain after the Great London Fire of 1666, mutual societies, in which each policyholder owned a share of the risk, predominated. The earliest American fire insurers followed this model as well. Established in the few urban centers where capital was concentrated, American mutuals were not considered money-making ventures, but rather were outgrowths of volunteer firefighting organizations. In 1735 Charleston residents formed the first American mutual insurance company, the Friendly Society of Mutual Insuring of Homes against Fire. It only lasted until 1741, when a major fire put it out of business.

Benjamin Franklin was the organizing force behind the next, more successful, mutual insurance venture, the Philadelphia Contributionship for the Insurance of Houses from Loss by Fire,[1] known familiarly by the name of its symbol, the “Hand in Hand.” By the 1780s, growing demand had led to the formation of other fire mutuals in Philadelphia, New York, Baltimore, Norwich (CT), Charleston, Richmond, Boston, Providence, and elsewhere. (See Table 1.)

Joint-Stock Companies

Joint-stock insurance companies, which raise capital through the sale of shares and distribute dividends, rose to prominence in American fire and marine insurance after the War of Independence. While only a few British insurers were granted the royal charters that allowed them to sell stock and to claim limited liability, insurers in the young United States found it relatively easy to obtain charters from state legislatures eager to promote a domestic insurance industry.

Joint-stock companies first appeared in the marine sector, where demand and the potential for profit were greater. Because they did not rely on the fortunes of any one individual, joint-stock companies provided greater security than private underwriting. In addition to their premium income, joint-stock companies maintained a fixed capital, allowing them to cover larger amounts than mutuals could.

The first successful joint-stock company, the Insurance Company of North America, was formed in 1792 in Philadelphia to sell marine, fire, and life insurance. By 1810, more than seventy such companies had been chartered in the United States. Most of the firms incorporated before 1810 operated primarily in marine insurance, although they were often chartered to handle other lines. (See Table 1.)

Table 1: American Insurance Companies, 1735-1810

Connecticut
1794 Norwich Mutual Fire Insurance Co. (Norwich)
1796 New Haven Insurance Co.
1797 New Haven Insurance Co. (Marine)
1801 Mutual Assurance Co. (New Haven)
1803 Hartford Insurance Co. (M)
1803 Middletown Insurance Co. (Middletown) (M)
1803 Norwich Marine Insurance Co.
1805 Union Insurance Co. (New London) (M)
1810 Hartford Fire Insurance Co.
Maryland
1787 Baltimore Fire Insurance Co. (Baltimore)
1791 Maryland I. Insurance Co. (Baltimore)
1794 Baltimore Equitable Society (Baltimore)
1795 Baltimore Fire Insurance Co. (Baltimore)
1795 Maryland Insurance Co. (Baltimore)
1796 Charitable Marine Society (Baltimore)
1798 Georgetown Mutual Insurance Co. (Georgetown)
1804 Chesapeake Insurance Co. (Baltimore)
1804 Marine Insurance Co. (Baltimore)
1804 Union Insurance Co. of MD (Baltimore)
Massachusetts (including the District of Maine)
1795 Massachusetts Fire and Marine Insurance Co. (Boston)
1798 Massachusetts Mutual Ins. Co. (Boston)
1799 Boston Marine Insurance Co. (Boston)
1799 Newburyport Marine Insurance Co. (Newburyport)
1800 Maine Fire and Marine Ins. Co. (Portland)
1800 Salem Marine Insurance Co. (Salem)
1803 New England Marine Insurance Co. (Boston)
1803 Suffolk Insurance Co. (Boston)
1803 Cumberland Marine and Fire Insurance Co. (Portland, ME)
1803 Essex Fire and Marine Insurance Co. (Salem)
1803 Gloucester Marine Ins. Co. (Gloucester)
1803 Lincoln and Kennebeck Marine Ins. Co. (Wicasset)
1803 Merrimac Marine and Fire Ins. Co. (Newburyport)
1803 Marblehead Marine Insurance Co. (Marblehead)
1803 Nantucket Marine Insurance Co. (Nantucket)
1803 Portland Marine and Fire Insurance Co. (Portland)
1804 North American Insurance Co. (Boston)
1804 Union Insurance Co. (Boston)
1804 Hampshire Mutual Fire Insurance Co. (Northampton)
1804 Kennebunk Marine Ins. Co. (Wells)
1804 Nantucket Union Marine Insurance Co. (Nantucket)
1804 Plymouth Marine Insurance Co. (Plymouth)
1804 Union Marine Insurance Co. (Salem)
1805 Bedford Marine Insurance Co. (New Bedford)
1806 Newburyport Marine Insurance Co. (Newburyport)
1807 Bath Fire and Marine Insurance Co. (Bath)
1807 Middlesex Insurance Co. (Charlestown)
1807 Union Marine and Fire Insurance Co. (Newburyport)
1808 Kennebeck Marine Ins. Co. (Bath)
1809 Beverly Marine Insurance Co. (Beverly)
1809 Marblehead Social (Marblehead)
1809 Social Insurance Co. (Salem)
Pennsylvania
1752 Philadelphia Contributionship for the Insurance of Houses from Loss by Fire
1784 Mutual Assurance Co. (Philadelphia)
1794 Insurance Co. of North America (Philadelphia)
1794 Insurance Co. of the State of Pennsylvania (Philadelphia)
1803 Phoenix Insurance Co. (Philadelphia)
1803 Philadelphia Insurance Co. (Philadelphia)
1804 Delaware Insurance Co. (Philadelphia)
1804 Union Insurance Co. (Chester County)
1807 Lancaster and Susquehanna Insurance Co.
1809 Marine and Fire Insurance Co. (Philadelphia)
1810 United States Insurance Co. (Philadelphia)
1810 American Fire Insurance Co. (Philadelphia)
Delaware
1810 Farmers’ Bank of the State of Delaware (Dover)
Rhode Island
1799 Providence Insurance Co.
1800 Washington Insurance Co.
1800 Providence Mutual Fire Insurance Co.
South Carolina
1735 Friendly Society (Charleston) – royal charter
1797 Charleston Insurance Co. (Charleston)
1797 Charleston Mutual Insurance Co. (Charleston)
1805 South Carolina Insurance Co. (Charleston)
1807 Union Insurance Co. (Charleston)
New Hampshire
1799 New Hampshire Insurance Co. (Portsmouth)
New York City
1787 Knickerbocker Fire Insurance Co. (originally Mutual Insurance Co. of the City of New York)
1796 New York Insurance Co.
1796 Insurance Co. of New York
1797 Associated Underwriters
1798 Mutual Assurance Co.
1800 Columbian Insurance Co.
1802 Washington Mutual Assurance Co.
1802 Marine Insurance Co.
1804 Commercial Insurance Co.
1804 Eagle Fire Insurance Co.
1807 Phoenix Insurance Co.
1809 Mutual Insurance Co.
1810 Fireman’s Insurance Co.
1810 Ocean Insurance Co.
North Carolina
1803 Mutual Insurance Co. (Raleigh)
Virginia
1794 Mutual Assurance Society (Richmond)

The Embargo Act (1807-1809) and the War of 1812 (1812-1814) interrupted shipping, drying up marine insurers’ premiums and forcing them to look for other sources of revenue. These same events also stimulated the development of domestic industries, such as textiles, which created new demand for fire insurance. Together, these events led many marine insurers into the fire field, previously a sideline for most. After 1810, new joint-stock companies appeared whose business centered on fire insurance from the outset. Unlike mutuals, these new fire underwriters insured contents as well as real estate, a growing necessity as Americans’ personal wealth began to expand.


Geographic Diversification

Until the late 1830s, most fire insurers concentrated on their local markets, with only a few experimenting with representation through agents in distant cities. Many state legislatures discouraged “foreign” competition by taxing the premiums of out-of-state insurers. This situation prevailed until 1835, when fire insurers learned a lesson they would not forget. A devastating fire destroyed New York City’s business district, causing between $15 million and $26 million in damage and bankrupting 23 of the city’s 26 local fire insurance companies. From this point on, fire insurers regarded the geographic diversification of risks as imperative.

Insurers sought to enter new markets in order to reduce their exposure to large-scale conflagrations. They gradually discovered that contracting with agents allowed them to expand broadly, rapidly, and at relatively low cost. Pioneered mainly by companies based in Hartford and Philadelphia, the agency system did not become truly widespread until the 1850s. Once the system began to emerge in earnest, it rapidly took off. By 1855, for example, New York State had authorized 38 out-of-state companies to sell insurance there. Most were fewer than five years old. By 1860, national companies relying on networks of local agents had replaced purely local operations as the mainstay of the industry.


As the agency system grew, so too did competition. By the 1860s, national fire insurance firms competed in hundreds of local markets simultaneously. Low capitalization requirements and the widespread adoption of general incorporation laws provided for easy entry into the field.

Competition forced insurers to base their premiums on short-term costs. As a result, fire insurance rates were inadequate to cover the long-term costs associated with the city-wide conflagrations that might occur unpredictably once or twice in a generation. When another large fire occurred, many consumers would be left with worthless policies.

Aware of this danger, insurers struggled to raise rates through cooperation. Their most notable effort was the National Board of Fire Underwriters. Formed in 1866 with 75 member companies, it established local boards throughout the country to set uniform rates. But by 1870, renewed competition led the members of the National Board to give up the attempt.


Insurance regulation developed during this period to protect consumers from the threat of insurance company insolvency. Beginning with New York (1849) and Massachusetts (1852), a number of states began to codify their insurance laws. Following New York’s lead in 1851, some states adopted $100,000-minimum capitalization requirements. But these rules did little to protect consumers when a large fire resulted in losses in excess of that amount.

By 1860 four states had established insurance departments. Two decades later, insurance departments, headed by a commissioner or superintendent, existed in some 25 states. In states without formal departments, the state treasurer, comptroller, or secretary of state typically oversaw insurance regulation.

State Insurance Departments through 1910
(Departments headed by insurance commissioner or superintendent unless otherwise indicated)

Source: Harry C. Brearley, Fifty Years of a Civilizing Force (1916), 261-174.
Year listed is year department began operating, not year legislation creating it was passed.

  • New Hampshire
  • Vermont (state treasurer served as insurance commissioner)
  • Massachusetts (annual returns required since 1837)
  • New York (comptroller first authorized to prepare reports in 1853, first annual report 1855)
  • Rhode Island
  • Indiana (1852-1865, state auditor headed)
  • Connecticut
  • West Virginia (state auditor supervised 1865 until 1907, when reorganized)
  • California
  • Maine
  • Missouri
  • Kentucky (part of bureau of state auditor’s department)
  • Kansas
  • Michigan
  • Florida
  • Ohio (1867-72, state auditor supervised)
  • Maryland
  • Minnesota
  • Arkansas
  • Nebraska
  • Pennsylvania
  • Tennessee (state treasurer acted as insurance commissioner)
  • Texas
  • Wisconsin (1867-78, secretary of state supervised insurance)
  • Delaware
  • Nevada (1864-1881, state comptroller supervised insurance)
  • Colorado
  • Georgia (1869-1887, insurance supervised by state comptroller general)
  • North Dakota
  • Washington (secretary of state acted as insurance commissioner until 1908)
  • Oklahoma (secretary of territory headed through 1907)
  • New Jersey (1875-1891, secretary of state supervised insurance)
  • Illinois (auditor of public accounts supervised insurance 1869-1893)
  • Utah (1884-1896, supervised by territorial secretary. Supervised by secretary of state until department reorganized in 1909)
  • Alabama (1860-1897, insurance supervised by state auditor)
  • Wyoming (territorial auditor supervised insurance 1868-1896) (1877)
  • South Dakota (1889-1897, state auditor supervised)
  • Louisiana (secretary of state acted as superintendent)
  • Alaska (administered by surveyor-general of territory)
  • Arizona (1887-1901 supervised by territorial treasurer)
  • Idaho (1891-1901, state treasurer headed)
  • Mississippi (1857-1902, auditor of public accounts supervised insurance)
  • District of Columbia
  • New Mexico (1882-1904, territorial auditor supervised)
  • Virginia (from 1866 auditor of public accounts supervised)
  • South Carolina (1876-1908, comptroller general supervised insurance)
  • Montana (supervised by territorial/state auditor 1883-1909)

The Supreme Court affirmed state supervision of insurance in 1868 in Paul v. Virginia, which found insurance not to be interstate commerce. As a result, it would not be subject to any federal regulations over the coming decades.


Chicago and Boston Fires

The Great Chicago Fire of October 8-10, 1871 destroyed over 2,000 acres (nearly 3½ square miles) of the city. With close to 18,000 buildings burned, including 1,500 “substantial business structures,” 100,000 people were left homeless and thousands jobless. Insurance losses totaled between $90 and $100 million. Many firms’ losses exceeded their available assets.

About 200 fire insurance companies did business in Chicago at the time. The fire bankrupted 68 of them. At least one-half of the property in the burnt district was covered by insurance, but as a result of the insurance company failures, Chicago policyholders recovered only about 40 percent of what they were owed.

A year later, on November 9 and 10, 1872, a fire destroyed Boston’s entire mercantile district, an area of 40 acres. Insured losses in this case totaled more than $50 million, bankrupting an additional 32 companies. The rate of insurance coverage was higher in Boston, where commercial property, everywhere more likely to be insured, happened to bear the brunt of the fire. Some 75 percent of ruined buildings and their contents were insured against fire. In this case, policyholders recovered about 70 percent of their insured losses.

Local Boards

After the Chicago and Boston fires revealed the inadequacy of insurance rates, surviving insurers again tried to set rates collectively. By 1875, a revitalized National Board had organized over 1,000 local boards, placing them under the supervision of district organizations. State auxiliary boards oversaw the districts, and the National Board itself was the final arbiter of rates. But this top-down structure encountered resistance from the local agents, long accustomed to setting their own rates. In the midst of the economic downturn that followed the Panic of 1873, the National Board’s efforts again collapsed.

In 1877, the membership took a fresh approach. They voted to dismantle the centralized rating bureaucracy, instead leaving rate-setting to local boards composed of agents. The National Board now focused its attention on promoting fire prevention and collecting statistics. By the mid-1880s, local rate-setting cartels operated in cities throughout the U.S. Regional boards or private companies rated smaller communities outside the jurisdiction of a local board.

The success of the new breed of local rate-setting cartels owed much to the ever-expanding scale of commerce and property, which fostered mutual dependence among the local agents. Individual agents typically represented multiple companies, and they routinely split risks among themselves and the several firms they served. Responding to the imperative of diversification, companies rarely covered more than $10,000 on an individual property, or even within a single city block.

As property values rose, it was not unusual to see single commercial buildings insured by 20 or more firms, each underwriting a $1,000 or $2,000 chunk of a given risk. Insurers who shared their business had few incentives to compete on price. Undercutting other insurers might even cost them future business. When a sufficiently large group of agents joined forces to set minimum prices, they effectively could shut out any agents who refused to follow the tariff.

Cooperative price-setting by local boards allowed insurers to maintain higher rates, taking periodic conflagrations into account as long-term costs. Cooperation also resulted, for the first time, in rates that followed a stable pattern, where aggregate prices reflected aggregate costs, the so-called underwriting cycle.

(Note: The underwriting cycle is illustrated above using combined ratios, which are the ratio of losses and expenses to premium income in any given year. Because combined ratios include dividend payments but not investment income, they are often greater than 100.)
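The combined-ratio calculation described in the note can be sketched with a small example. All of the dollar figures below are invented for illustration; only the definition of the ratio comes from the text.

```python
def combined_ratio(losses: float, expenses: float, premiums: float) -> float:
    """Combined ratio: losses plus expenses as a percentage of premium income."""
    return 100 * (losses + expenses) / premiums

# Hypothetical underwriting year: $60M in losses, $45M in expenses and
# dividends, against $100M in premium income.
ratio = combined_ratio(60, 45, 100)
print(ratio)  # 105.0 -- above 100, so underwriting alone ran at a loss
```

A ratio above 100 is sustainable only because, as the note observes, investment income is excluded from the calculation.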

Local boards helped fire insurance companies diversify their risks and stabilize their rates. The companies in turn, supported the local boards. As a result, the local rate-setting boards that formed during the early 1880s proved remarkably durable and successful. Despite brief disruptions in some cities during the severe economic downturn of the mid-1890s, the local boards did not fail.

As an additional benefit, insurers were able to accomplish collectively what they could not afford to do individually: collect and analyze data on a large scale. The “science” of fire insurance remained in its infancy. The local boards inspected property and created detailed rating charts. Some even instituted scheduled rating – a system where property owners were penalized for defects, such as lack of fire doors, and rewarded for improvements. Previously, agents had set rates based on their personal, idiosyncratic knowledge of local conditions. Within the local boards, agents shared both their subjective personal knowledge and objective data. The results were a crude approximation of an actuarial science.

Anti-Compact Laws

Price-setting by local boards was not viewed favorably by many policy-holders who had to pay higher prices for insurance. Since Paul v. Virginia had exempted insurance from federal antitrust laws, consumers encouraged their state legislatures to pass laws outlawing price collusion among insurers. Ohio adopted the first anti-compact law in 1885, followed by Michigan (1887), Arkansas, Nebraska, Texas, and Kansas (1889), Maine, New Hampshire, and Georgia (1891). By 1906, 19 states had anti-compact laws, but they had limited effectiveness. Where open collusion was outlawed, insurers simply established private rating bureaus to set “advisory” rates.

Spread of Insurance

Local boards flourished in prosperous times. During the boom years of the 1880s, new capital flowed into every sector. The increasing concentration of wealth in cities steadily drove up both the amount of property insured and the proportion of property carrying coverage. Between 1880 and 1889, insurance coverage rose at an average rate of 4.6 percent a year, increasing 50 percent overall. By 1890, close to 60 percent of burned property in the U.S. was insured, a figure that would not be exceeded until the 1910s, when upwards of 70 percent of property was insured.
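The compound-growth figures above are internally consistent, which a quick calculation confirms. The 4.6 percent rate and the nine-year span are from the text; the arithmetic below is simply a check.

```python
annual_rate = 0.046                 # average annual growth in coverage, from the text
years = 9                           # nine years of growth between 1880 and 1889
total_growth = (1 + annual_rate) ** years - 1
print(f"{total_growth:.1%}")        # 49.9%, matching the "50 percent overall"
```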

In 1889, the dollar value of property insured against fire in the United States approached $12 billion. Fifteen years later, $20 billion in property was covered.

Baltimore and San Francisco

The ability of higher, more stable prices to insulate the industry and society from the consequences of citywide conflagrations can be seen in the strikingly different outcomes of the next two great urban fires, which struck Baltimore and San Francisco in the early 1900s. The Baltimore Fire of Feb. 7 through 9, 1904 resulted in $55 million in insurance claims, 90 percent of which was paid. Only a few Maryland-based companies went bankrupt.

San Francisco’s disaster dwarfed Baltimore’s. The earthquake that struck the city on April 18, 1906 set off fires that burned for three days, destroying over 500 blocks that contained at least 25,000 buildings. The damages totaled $350 million, some two-thirds covered by insurance. In the end, $225 million was paid out, or around 90 percent of what was owed. Only 20 companies operating in San Francisco were forced to suspend business, some only temporarily.

Improvements in construction and firefighting would put an end to the giant blazes that had plagued America’s cities. But by the middle of the first decade of the twentieth century, cooperative price-setting in fire insurance already had ameliorated the worst economic consequences of these disasters.


State Rate-Setting

Despite the passage of anti-compact legislation, fire insurance in the early 1900s was regulated as much by companies as by state governments. After Baltimore and San Francisco, state governments, recognizing the value of cooperative price-setting, began to abandon anti-compact laws in favor of state involvement in rate-setting, which took one of two forms: state-set rates, or state review of industry-set rates.

Kansas was the first to adopt strict rate regulation in 1909, followed by Texas in 1910 and Missouri in 1911. These laws required insurers to submit their rates for review by the state insurance department, which could overrule them. Contesting the constitutionality of its law, the insurance industry took the State of Kansas to court. In 1914, the Supreme Court of the United States decided German Alliance Insurance Co. v. Ike Lewis, Superintendent of Insurance in favor of Kansas. The Court declared insurance to be a business affected with a public interest, and thus subject to rate regulation.

While the case was pending, New York entered the rating arena in 1911 with a much less restrictive law. New York’s law was greatly influenced by a legislative investigation, the Merritt Committee. The Armstrong Committee’s investigation of New York’s life insurance industry in 1905 had uncovered numerous financial improprieties, leading legislators to call for investigations into the fire insurance industry, where they hoped to discover similar evidence of corruption or profiteering. The Merritt Committee, which met in 1910 and 1911, instead found that most fire insurance companies brought in only modest profits.

The Merritt Committee further concluded that cooperation among firms was often in the public interest, and recommended that insurance boards continue to set rates. The ensuing law mandated state review of rates to prevent discrimination, requiring companies to charge the same rates for the same types of property. The law also required insurance companies to submit uniform statistics on premiums and losses for the first time. Other states soon adopted similar requirements. By the early 1920s, nearly thirty states had some form of rate regulation.

Data Collection

New York’s data-collection requirement had far-reaching consequences for the entire fire insurance industry. Because every major insurer in the United States did business in New York (and often a great deal of it), any regulatory act passed there had national implications. And once New York mandated that companies submit data, the imperative for a uniform classification system was born.

In 1914, the industry responded by creating an Actuarial Bureau within the National Board of Fire Underwriters to collect uniformly organized data and submit it to the states. Supported by the National Convention of Insurance Commissioners (today called the National Association of Insurance Commissioners, or NAIC), the Actuarial Bureau was soon able to establish uniform, industry-wide classification standards. The regular collection of uniform data enabled the development of modern actuarial science in the fire field.

1920 to the Present

Federal Regulation

Through the 1920s and 1930s, property insurance rating continued as it had before, with various rating bureaus determining the rates that insurers were to charge, and the states reviewing or approving them. In 1944, the Supreme Court decided a federal antitrust suit against the Southeastern Underwriters Association, which set rates in a number of southern states. The Supreme Court found the SEUA to be in violation of the Sherman Act, thereby overturning Paul v. Virginia. The industry had become subject to federal regulation for the first time.

Within a year, Congress had passed the McCarran-Ferguson Act, allowing the states to continue regulating insurance so long as they met certain federal requirements. The law also granted the industry a limited exemption from antitrust statutes. The Act gave the National Association of Insurance Commissioners three years to develop model rating laws for the states to adopt.

State Rating Laws

In 1946, the NAIC adopted model rate laws for fire and casualty insurance that required “prior approval” of rates by the states before they could be used by insurers. While most of the industry supported this requirement as a way to prevent competition, a group of “independent” insurers opposed prior approval and instead supported “file and use” rates.

By the 1950s, all states had passed rating laws, although not necessarily the model laws. Some allowed insurers to file deviations from bureau rates, while others required bureau membership and strict prior approval of rates. Most regulatory activity through the late 1950s involved the industry’s attempts to protect the bureau rating system.

The bureaus’ tight hold on rates was soon to loosen. In 1959, an investigation into bureau practices by a U.S. Senate Antitrust subcommittee (the O’Mahoney Committee) found that competition should be the main regulator of the industry. As a result, some states began to make it easier for insurers to deviate from prior approval rates.

During the 1960s, two different systems of property/casualty insurance regulation developed. While many states abandoned prior approval in favor of competitive rating, others strengthened strict rating laws. At the same time, the many rating bureaus that had provided rates for different states began to consolidate. By the 1970s, the rates that these combined rating bureaus provided were officially only advisory. Insurers could choose whether to use them or develop their own rates.

Although membership in rating bureaus is no longer mandatory, advisory organizations continue to play an important part in property/casualty insurance by providing required statistics to the states. They also allow new firms easy access to rating data. The Insurance Services Office (ISO), one of the largest “bureaus,” became a for-profit corporation in 1997, and is no longer controlled by the insurance industry. Even in its current, mature state, however, the property/casualty field still functions largely according to the patterns set in fire insurance by the 1920s.

References and Further Reading:

Bainbridge, John. Biography of an Idea: The Story of Mutual Fire and Casualty Insurance. New York: Doubleday, 1952.

Baranoff, Dalit. “Shaped By Risk: Fire Insurance in America 1790-1920.” Ph.D. dissertation, Johns Hopkins University, 2003.

Brearley, Harry Chase. Fifty Years of a Civilizing Force: An Historical and Critical Study of the Work of the National Board of Fire Underwriters. New York: Frederick A. Stokes Company, 1916.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames: Iowa State University Press, 1979.

Harrington, Scott E. “Insurance Rate Regulation in the Twentieth Century.” Journal of Risk and Insurance 19, no. 2 (2000): 204-18.

Lilly, Claude C. “A History of Insurance Regulation in the United States.” CPCU Annals 29 (1976): 99-115.

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Pomeroy, Earl and Carole Olson Gates. “State and Federal Regulation of the Business of Insurance.” Journal of Risk and Insurance 19, no. 2 (2000): 179-88.

Tebeau, Mark. Eating Smoke: Fire in Urban America, 1800-1950. Baltimore: Johns Hopkins University Press, 2003.

Wagner, Tim. “Insurance Rating Bureaus.” Journal of Risk and Insurance 19, no. 2 (2000): 189-203.

[1] The name appears in various sources as either the “Contributionship” or the “Contributorship.”

Citation: Baranoff, Dalit. “Fire Insurance in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008.

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.


Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

Population of the Original Thirteen Colonies, selected years by type

State              1750                1790                            1810                             1860
                   White     Black     White      Free NW   Slave      White      Free NW   Slave       White       Free NW   Slave
Connecticut        108,270   3,010     232,236    2,771     2,648      255,179    6,453     310         451,504     8,643     -
Delaware           27,208    1,496     46,310     3,899     8,887      55,361     13,136    4,177       90,589      19,829    1,798
Georgia            4,200     1,000     52,886     398       29,264     145,414    1,801     105,218     591,550     3,538     462,198
Maryland           97,623    43,450    208,649    8,043     103,036    235,117    33,927    111,502     515,918     83,942    87,189
Massachusetts      183,925   4,075     373,187    5,369     -          465,303    6,737     -           1,221,432   9,634     -
New Hampshire      26,955    550       141,112    630       157        182,690    970       -           325,579     494       -
New Jersey         66,039    5,354     169,954    2,762     11,423     226,868    7,843     10,851      646,699     25,318    -
New York           65,682    11,014    314,366    4,682     21,193     918,699    25,333    15,017      3,831,590   49,145    -
North Carolina     53,184    19,800    289,181    5,041     100,783    376,410    10,266    168,824     629,942     31,621    331,059
Pennsylvania       116,794   2,872     317,479    6,531     3,707      786,804    22,492    795         2,849,259   56,956    -
Rhode Island       29,879    3,347     64,670     3,484     958        73,214     3,609     108         170,649     3,971     -
South Carolina     25,000    39,000    140,178    1,801     107,094    214,196    4,554     196,365     291,300     10,002    402,406
Virginia           129,581   101,452   442,117    12,866    292,627    551,534    30,570    392,518     1,047,299   58,154    490,865
United States      934,340   236,420   2,792,325  58,277    681,777    4,486,789  167,691   1,005,685   12,663,310  361,247   1,775,515

("Free NW" = free nonwhite; "-" = no slaves reported.)

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

Slaves as a Percent of the Total Population
selected years, by Southern state

                  1750           1790           1810           1860
State             Black/total    Slave/total    Slave/total    Slave/total
Alabama           -              -              -              45.12
Arkansas          -              -              -              25.52
Delaware          5.21           15.04          5.75           1.60
Florida           -              -              -              43.97
Georgia           19.23          35.45          41.68          43.72
Kentucky          -              16.87          19.82          19.51
Louisiana         -              -              -              46.85
Maryland          30.80          32.23          29.30          12.69
Mississippi       -              -              -              55.18
Missouri          -              -              -              9.72
North Carolina    27.13          25.51          30.39          33.35
South Carolina    60.94          43.00          47.30          57.18
Tennessee         -              -              17.02          24.84
Texas             -              -              -              30.22
Virginia          43.91          39.14          40.27          30.75
Overall           37.97          33.95          33.25          32.27

("-" = not yet a state or no data reported.)

Sources: Historical Statistics of the United States (1970), Franklin (1988).

Holdings of Southern Slaveowners
by states, 1860

State   Total          Held 1    Held 2    Held 3    Held 4    Held 5    Held 1-5   Held 100-    Held 500+
        slaveholders   slave     slaves    slaves    slaves    slaves    slaves     499 slaves   slaves
AL      33,730         5,607     3,663     2,805     2,329     1,986     16,390     344          -
AR      11,481         2,339     1,503     1,070     894       730       6,536      65           1
DE      587            237       114       74        51        34        510        -            -
FL      5,152          863       568       437       365       285       2,518      47           -
GA      41,084         6,713     4,335     3,482     2,984     2,543     20,057     211          8
KY      38,645         9,306     5,430     4,009     3,281     2,694     24,720     7            -
LA      22,033         4,092     2,573     2,034     1,536     1,310     11,545     543          4
MD      13,783         4,119     1,952     1,279     1,023     815       9,188      16           -
MS      30,943         4,856     3,201     2,503     2,129     1,809     14,498     315          1
MO      24,320         6,893     3,754     2,773     2,243     1,686     17,349     4            -
NC      34,658         6,440     4,017     3,068     2,546     2,245     18,316     133          -
SC      26,701         3,763     2,533     1,990     1,731     1,541     11,558     441          8
TN      36,844         7,820     4,738     3,609     3,012     2,536     21,715     47           -
TX      21,878         4,593     2,874     2,093     1,782     1,439     12,781     54           -
VA      52,128         11,085    5,989     4,474     3,807     3,233     28,588     114          -
TOTAL   393,967        78,726    47,244    35,700    29,713    24,886    216,269    2,341        22

("-" = none reported.)

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? The answer is an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

Slave Law

Central to the success of slavery were political and legal institutions that validated the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

Slave Markets and Prices

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls of the same age sold for 65 percent. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.

Source: Fogel and Engerman (1974)

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1381 in 1861 and for $1116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.

Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).
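The gap between the nominal and the real wartime decline is simple arithmetic. A minimal sketch, assuming (purely for illustration) that the general price level doubled between 1861 and 1862; the actual pace of wartime inflation varied:

```python
# Nominal New Orleans prices for prime male slaves, from the text above.
nominal_1861 = 1381.0
nominal_1862 = 1116.0

# Assumed price level for 1862, with 1861 = 1.0. The doubling factor is
# invented for this illustration, not a measured inflation rate.
price_level_1862 = 2.0

# Deflate the 1862 price into 1861 dollars.
real_1862 = nominal_1862 / price_level_1862          # 558.0 in 1861 dollars

nominal_decline = 1 - nominal_1862 / nominal_1861    # roughly 19 percent
real_decline = 1 - real_1862 / nominal_1861          # roughly 60 percent
```

Under this assumed inflation factor, a 19 percent nominal decline becomes a real decline of about 60 percent, which is the sense in which real prices fell considerably more than nominal ones.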

The Profitability of Slavery

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known episode suggests that free workers themselves believed urban slavery worked all too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
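A total-factor-productivity index of this kind can be sketched as output divided by a weighted geometric index of inputs. The factor shares and farm figures below are illustrative assumptions, not Fogel and Engerman’s actual weights or data.

```python
# A minimal total-factor-productivity sketch: output per composite
# unit of input, where the input index is a weighted geometric mean
# of labor, land, and capital. The shares below are illustrative.

def tfp(output, labor, land, capital, shares=(0.6, 0.25, 0.15)):
    a, b, c = shares
    input_index = labor**a * land**b * capital**c
    return output / input_index

# Two hypothetical farms with identical inputs but different output:
free_farm = tfp(output=100, labor=10, land=50, capital=20)
slave_farm = tfp(output=153, labor=10, land=50, capital=20)
print(f"relative efficiency: {slave_farm / free_farm:.2f}")  # 1.53
```

With identical inputs the input index cancels and the ratio reduces to the ratio of outputs; in practice the index matters precisely because farms differed in their input mixes.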

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This means that a slave farm otherwise identical to a free farm (in terms of the amount of land, livestock, machinery, and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.
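The roughly 10 percent figure is an internal rate of return: the discount rate that equates the purchase price of a slave with the present value of expected annual net earnings. The sketch below shows the calculation with hypothetical dollar figures, not Conrad and Meyer’s actual estimates.

```python
# Internal rate of return on an asset purchase: find r such that
# price = sum over t of (annual net earnings) / (1 + r)^t.
# Solved by bisection; all dollar figures below are hypothetical.

def internal_rate(price, annual_earnings, years):
    def present_value(r):
        return sum(annual_earnings / (1 + r) ** t
                   for t in range(1, years + 1))
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > price:
            lo = mid  # PV too high: the implied rate is larger
        else:
            hi = mid
    return (lo + hi) / 2

# e.g., an $800 purchase yielding $85 per year over 25 working years
r = internal_rate(800, 85, 25)
print(f"implied rate of return: {r:.1%}")  # about 9.5% with these figures
```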

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.


For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —-
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 —- —-
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 —- —-
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930.

With the exception of the figures for 2000, participation rates are based on the gainful employment standard

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was sixteen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 —- —-
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 —- —-
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Note: North includes the New England, Middle Atlantic, and North Central regions

South includes the South Atlantic and South Central regions

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
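The two-part decision just described can be put in a minimal sketch; all names and dollar figures below are hypothetical illustrations, not estimates from the literature.

```python
# Retire only if (1) resources cover expected lifetime needs and
# (2) the net benefit of retirement exceeds the net benefit of work.
# All figures are hypothetical illustrations.

def chooses_retirement(resources, lifetime_needs,
                       value_of_leisure, labor_income, work_costs):
    if resources < lifetime_needs:      # resource constraint binds
        return False
    # Compare leisure against earnings net of the costs of working.
    return value_of_leisure > labor_income - work_costs

# Rising costs of working (age, illness, disability) tip the choice:
print(chooses_retirement(50_000, 40_000, 10_000, 15_000, 2_000))  # False
print(chooses_retirement(50_000, 40_000, 10_000, 15_000, 8_000))  # True
```

Note that pension income enters this sketch twice, as the text describes: it relaxes the resource constraint and raises the value of the leisure that retirement makes possible.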

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent for the elderly born in the nineteenth century than for men born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth has led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3,430 (in 1996 dollars), which is comparable to the levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living commensurate with eight times the income of Americans living a century ago. More income has made it possible to save for an extended retirement.
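The “nearly eightfold” figure follows directly from the two GDP-per-capita estimates quoted above (both in 1996 dollars):

```python
# Ratio of the two real GDP-per-capita figures given in the text.
gdp_1890 = 3430
gdp_1990 = 26889
growth = gdp_1990 / gdp_1890
print(f"growth factor: {growth:.2f}")  # 7.84, i.e., nearly eightfold
```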

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. Under this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans funded on a ‘pay as you go’ basis — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. The struggling railroad pensions, which had strong political allies, were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions, and to provide tax incentives (like those for employer-provided pensions) for those without access to private pensions to save for retirement. Since 1962, the self-employed have been able to establish ‘Keogh plans’ — tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to follow funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA provided those not covered by a private pension with the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from ‘defined benefit’ to ‘defined contribution’ plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and greater employee choice over how the pension funds are invested.

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.


Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, January 18, 2002.

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997.

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL

Public Sector Pensions in the United States

Lee A. Craig, North Carolina State University


Although employer-provided retirement plans are a relatively recent phenomenon in the private sector, dating from the late nineteenth century, public sector plans go back much further in history. From the Roman Empire to the rise of the early-modern nation state, rulers and legislatures have provided pensions for the workers who administered public programs. Military pensions, in particular, have a long history, and they have often been used as a key element to attract, retain, and motivate military personnel. In the United States, pensions for disabled and retired military personnel predate the signing of the U.S. Constitution.

Like military pensions, pensions for loyal civil servants date back centuries. Prior to the nineteenth century, however, these pensions were typically handed out on a case-by-case basis; except for the military, there were few if any retirement plans or systems with well-defined rules for qualification, contributions, funding, and so forth. Most European countries maintained some type of formal pension system for their public sector workers by the late nineteenth century. Although a few U.S. municipalities offered plans prior to 1900, most public sector workers were not offered pensions until the first decades of the twentieth century. Teachers, firefighters, and police officers were typically the first non-military workers to receive a retirement plan as part of their compensation.

By 1930, pension coverage in the public sector was relatively widespread in the United States, with all federal workers being covered by a pension and an increasing share of state and local employees included in pension plans. In contrast, pension coverage in the private sector during the first three decades of the twentieth century remained very low, perhaps as low as 10 to 12 percent of the labor force (Clark, Craig, and Wilson 2003). Even today, pension coverage is much higher in the public sector than it is in the private sector. Over 90 percent of public sector workers are covered by an employer-provided pension plan, whereas only about half of the private sector work force is covered (Employee Benefit Research Institute 1997).

Although the term “pension” today generally refers to cash payments received after the termination of one’s working years, typically in the form of an annuity, historically a much wider range of retiree benefits, survivor’s annuities, and disability benefits were also referred to as pensions. In the United States, for example, the initial army and navy pension systems were primarily disability plans. However, disability was often liberally defined and included superannuation or the inability to perform regular duties due to infirmities associated with old age. In fact, every disability plan created for U.S. war veterans eventually became an old-age pension plan, and the history of these plans often reflected broader economic and social trends.

Early Military Pensions

Ancient Rome

Military pensions date from antiquity. Almost from its founding, the Roman Republic offered pensions to its successful military personnel; however, these payments, which often took the form of land or special appropriations, were generally ad hoc and typically based on the machinations of influential political cliques. As a result, on more than one occasion, a pension served as little more than a bribe to incite soldiers to serve as the personal troops of the politicians who secured the pension. No small amount of the turmoil accompanying the Republic’s decline can be attributed to this flaw in Roman public finance.

After establishing the Empire, Augustus, who knew a thing or two about the politics and economics of military issues, created a formal pension plan (13 BC): Veteran legionnaires were to receive a pension upon the completion of sixteen years in a legion and four years in the military reserves. This was a true retirement plan designed to reward and mollify veterans returning from Rome’s frontier campaigns. The original Augustan pension suffered from the fact that it was paid from general revenues (and Augustus’ own generous contributions), and in 5 AD (6 AD according to some sources), Augustus established a special fund (aerarium militare) from which retiring soldiers were paid. The length of service was also increased from sixteen years on active duty to twenty (and five years in the reserves), and the pension system was explicitly funded through a five percent tax on inheritances and a one percent tax on all transactions conducted through auctions — essentially a sales tax. Retiring legionnaires were to receive 3,000 denarii; centurions received considerably larger stipends (Crook 1996). In the first century AD, a lump-sum payment of 3,000 denarii would have represented a substantial amount of money — at least by working-class standards. A single denarius equaled roughly a day’s wage for a common laborer; so at an eight percent discount rate (Homer and Sylla 1991), the pension would have yielded an annuity of roughly 66 to 75 percent of a laborer’s annual earnings. Curiously, the basic parameters of the Augustan pension system look much like those of modern public sector pension plans. Although the state pension system perished with Rome, the key features — twenty to twenty-five years of service to qualify and a “replacement rate” of 66 to 75 percent — would reemerge more than a thousand years later to become benchmarks for modern public sector plans.
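
The replacement-rate arithmetic behind these figures can be reconstructed as a short sketch. The 320- and 365-day working years below are illustrative assumptions chosen to recover the 66 to 75 percent range; they are not figures from the sources cited above.

```python
pension = 3_000       # denarii: lump sum for a retiring legionnaire
discount_rate = 0.08  # per Homer and Sylla (1991)

# Perpetuity yield on the lump sum: 240 denarii per year.
annuity = pension * discount_rate

# A common laborer earned roughly one denarius per day worked.
for workdays in (320, 365):  # assumed days worked per year (illustrative)
    replacement = annuity / workdays
    print(f"{workdays} days/year: {replacement:.0%} of annual earnings")
```

At 320 working days the annuity replaces 75 percent of a laborer’s annual earnings; at a full 365 days, about 66 percent, matching the range quoted above.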

Early-modern Europe

The Roman pension system collapsed, or perhaps withered away is the better term, with Rome itself, and for nearly a thousand years military service throughout Western Civilization was based on personal allegiance within a feudal hierarchy. During the Middle Ages, there were no military pensions strictly comparable to the Roman system, but with the establishment of the nation state came the reemergence of standing armies led by professional soldiers. Like the legions of Imperial Rome, these armies owed their allegiance to a state rather than to a person. The establishment of standardized systems of military pensions followed very shortly thereafter, beginning as early as the sixteenth century in England. During its 1592-93 session, Parliament established “reliefe for Souldiours … [who] adventured their lives and lost their limbs or disabled their bodies” in the service of the Crown (quoted in Clark, Craig, and Wilson 2003, p. 29). Annual pensions were not to exceed ten pounds for “private soldiers,” or twenty pounds for a “lieutenant.” Although one must be cautious in the use of income figures and exchange rates from that era, an annuity of ten pounds would have roughly equaled fifty gold dollars (at subsequent exchange rates), which was the equivalent of per capita income a century or so later, making the pension generous by contemporary standards.

These pensions were nominally disability payments, not retirement pensions, though governments often awarded the latter on a case-by-case basis, and by the eighteenth century all of the other early-modern Great Powers — France, Austria, Spain, and Prussia — maintained some type of military pensions for their officer castes. These public pensions were not universally popular. Indeed, they were often viewed as little more than spoils. Samuel Johnson famously described a public pension as “generally understood to mean pay given to a state-hireling for treason to his country” (quoted in Clark, Craig, and Wilson 2003, 29). By the early nineteenth century, Britain, France, Prussia, and Spain all had formal retirement plans for their military personnel. The benchmark for these plans was the British “half-pay” system in which retired, disabled or otherwise unemployed officers received roughly fifty percent of their base pay. This was fairly lucrative compared to the annuities received by their continental counterparts.

Military Pensions in the United States

Prior to the American Revolution, Britain’s American colonies provided pensions to disabled men who were injured defending the colonists and their property from the French, the Spanish, and the natives. During the Revolutionary War the colonies extended this coverage to the members of their militias. Several colonies maintained navies, and they also offered pensions to their naval personnel. Independent of the actions of the colonial legislatures, the Continental Congress established pensions for its army (1776) and naval forces (1775). U.S. military pensions have been continuously provided, in one form or another, ever since.

Revolutionary War Era

Although initially these were all strictly disability plans, in order to keep the troops in the field during the crucial months leading up to the Battle of Yorktown (1781), Congress authorized the payment of a life annuity, equal to one-half base pay, to all officers remaining in the service for the duration of the Revolution. It was not long before Congress and the officers in question realized that the national government’s cash-flow situation and the present value of its future revenues were insufficient to meet this promise. Ultimately, the leaders of the disgruntled officers met at Newburgh, New York, and pressed their demands on Congress, and in the spring of 1783, Congress converted the life annuities to a fixed-term payment equal to full pay for five years. Even these more limited obligations were not fully paid to qualifying veterans, and only the direct intervention of George Washington defused a potential coup (Ferguson 1961; Middlekauff 1982). The Treaty of Paris was signed in September of 1783, and the Continental Army was furloughed shortly thereafter. The officers’ pension claims were subsequently met to a degree by special interest-bearing “commutation certificates” — bonds, essentially. It took another eight years before the Constitution and Alexander Hamilton’s financial reforms placed the new federal government in a position to honor these obligations by the issuance of the new (consolidated) federal debt. However, because of the country’s precarious financial situation, between the Revolution and the consolidation of the debt, many embittered officers sold their “commutation” bonds in the secondary market at a steep discount.

In addition to a “regular” army pension plan, every war from the Revolution through the Indian Wars of the late-nineteenth century saw the creation of a pension plan for the veterans of that particular war. Although every one of those plans was initially a disability plan, they were all eventually converted into old-age pension plans — though this conversion often took a long time. The Revolutionary War plan became a general retirement plan in 1832 — 49 years after the Treaty of Paris ended the war. At that time every surviving veteran of the Revolutionary War received a pension equal to 100 percent of his base pay at the end of the war. Similarly, it was 56 years after the War of 1812 before survivors of that war were given retirement pensions.

Severance Pay

As for a retirement plan for the “regular” army, there was none until the Civil War; however, soldiers who were discharged after 1800 were given three months’ pay as severance. Officers were initially offered the same severance package as enlisted personnel, but in 1802, officers began receiving one month’s pay for each year of service over three years. Hence an officer with twelve years of service earning, say, $40 a month could, theoretically, convert his severance into an annuity, which at a six percent rate of interest would pay $2.40 a month, or less than $30 a year. This was substantially less than a prime farmhand could expect to earn and a pittance compared to that of, say, a British officer. Prior to the onset of the War of 1812, Congress supplemented these disability and severance packages with a type of retirement pension. Any soldier who enlisted for five years and who was honorably discharged would receive, in addition to his three months’ severance, 160 acres of land from the so-called military reserve. If he was killed in action or died in the service, his widow or heir(s) would receive the same benefit. The reservation price of public land at that time was $2.00 per acre ($1.64 for cash). So the severance package would have been worth roughly $350, which, annuitized at six percent, would have yielded less than $2.00 a month in perpetuity. This was an ungenerous settlement by almost any standard. Of course, in a nation of small farmers, 160 acres might have represented a good start for a young cash-poor farmhand just out of the army.
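
The annuity figures in this paragraph can be checked with simple arithmetic. The sketch below reads the 1802 rule as one month’s pay for each of the officer’s twelve years (the reading that reproduces the text’s $2.40 figure) and takes the quoted $350 land-bounty value as given; both readings are interpretations, not additional facts.

```python
rate = 0.06  # six percent annual rate of interest

# Officer severance: twelve years of service at $40 a month.
monthly_pay = 40
severance = monthly_pay * 12            # $480 lump sum
print(round(severance * rate / 12, 2))  # 2.4 dollars a month, under $30 a year

# Enlisted land bounty: 160 acres at the $2.00 reservation price,
# plus cash severance, taken as roughly $350 in the text.
land_value = 160 * 2.00                 # $320 for the land alone
print(round(350 * rate / 12, 2))        # 1.75, i.e. less than $2.00 a month
```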

The Army Develops a Retirement Plan

The Civil War resulted in a fundamental change in this system. Seeking the power to cull the active list of officers, the Lincoln administration persuaded Congress to pass the first general army retirement law. All officers could apply for retirement after 40 years of service, and a formal retirement board could retire any officer (after 40 years of service) who was deemed incapable of field service. A limit was placed on the number of officers who could be retired in this manner. Congress amended the law several times over the next few decades, with the key changes coming in 1870 and 1882. Taken together, these acts established 30 years as the minimum service requirement, 75 percent of base pay as the standard pension, and age 64 as the mandatory retirement age. This was the basic army pension plan until 1920, when Congress established the “up-or-out” policy in which an officer who was not deemed to be on track for promotion was retired. Such an officer was to receive a retirement benefit equal to 2.5 percent of base pay multiplied by years of service, not to exceed 75 percent of his base pay at the time of retirement. Although the maximum was reduced to 60 percent in 1924, it was subsequently increased back to 75 percent, and the service requirement was reduced to 20 years. In its essentials, this remains the basic plan for military personnel to this day (Hustead and Hustead 2001).
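
The post-1920 benefit formula described in this paragraph reduces to a one-line function; a sketch, for illustration:

```python
def retirement_fraction(years_of_service):
    """Benefit as a fraction of base pay: 2.5 percent per year of
    service, capped at 75 percent (the post-1920 formula above)."""
    return min(0.025 * years_of_service, 0.75)

print(retirement_fraction(20))  # 0.5  -- the 20-year minimum service point
print(retirement_fraction(30))  # 0.75 -- the cap is reached
print(retirement_fraction(40))  # 0.75 -- further service adds nothing
```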

Except for the disability plans that were eventually converted to old-age pensions, prior to 1885 the army retirement plan was only available to commissioned officers; however, in that year Congress created the first systematic retirement plan for enlisted personnel in the U.S. Army. Like the officers’ plan, it permitted retirement upon the completion of 30 years of service at 75 percent of base pay. With the subsequent reduction in the minimum service requirement to 20 years, the enlisted plan merged with that for officers.

Naval Pensions

Until after World War I, the army and the navy maintained separate pension plans for their officers. The Continental Navy created a pension plan for its officers and seamen in 1775, even before an army plan was established. In the following year the navy plan was merged with the first army pension plan, and it too was eventually converted to a retirement plan for surviving veterans in 1832. The first disability pension plan for “regular” navy personnel was created in 1799. Officers’ benefits were not to exceed half-pay, while those for seamen and marines were not to exceed $5.00 a month, which was roughly 33 percent of an unskilled seaman’s base pay or 25 percent of that of a hired laborer in the private sector.

Except for the eventual conversion of the war pensions to retirement plans, there was no formal retirement plan for naval personnel until 1855. In that year Congress created a review board composed of five officers from each of the following ranks: captain, commander, and lieutenant. The board was to identify superannuated officers, or those generally found to be unfit for service, and at the discretion of the Secretary of the Navy, those officers were to be placed on the reserve list at half-pay, subject to the approval of the President. Before the plan had much impact the Civil War intervened, and in 1861 Congress established the essential features of the navy retirement plan, which were to remain in effect throughout the rest of the century. Like the army plan, retirement could occur in one of two ways: either a retirement board could find the officer incapable of continuing on active duty, or, after 40 years of service, an officer could apply for retirement. In either case, officers on the retired list remained subject to recall; they were entitled to wear their uniforms; they were subject to the Articles of War and courts-martial; and they received 75 percent of their base pay. However, just as with the army, certain constraints on the length of the retired list limited the effectiveness of the act.

In 1899, largely at the urging of then Assistant Secretary of the Navy Theodore Roosevelt, the navy adopted a rather Byzantine scheme for identifying and forcibly retiring officers deemed unfit to continue on active duty. Retirement (or “plucking”) boards were responsible for identifying those to be retired. Officers could avoid the ignominy of forced retirement by volunteering to retire, and there was a ceiling on the number who could be retired by the boards. In addition, all officers retired under this plan were to receive 75 percent of the sea pay of the next rank above that which they held at the time of retirement. (This last feature was amended in 1912, and officers simply received three-fourths of the pay of the rank in which they retired.) During the expansion of the navy leading up to America’s participation in World War I, the plan was further amended, and in 1915 the president was authorized, with the advice and consent of the Senate, to reinstate any officer involuntarily retired under the 1899 act.

Still, the navy continued to struggle with its superannuated officers. In 1908, Congress finally granted naval officers the right to retire voluntarily at 75 percent of their active-duty pay upon the completion of 30 years of service. In 1916, navy pension rules were again altered, and this time a basic principle – “up or out” (with a pension) – was established, a principle which continues to this day. Four basic components differentiated the new navy pension plan from earlier ones. First, promotion to the ranks of rear admiral, captain, and commander was based on the recommendations of a promotion board; prior to that time, promotions had been based solely on seniority. Second, the officers on the active list were to be distributed among the ranks according to percentages that were not to exceed certain limits; thus, there was a cap on the number of officers who could be promoted to a given rank. Third, age limits were placed on officers in each grade. Officers who attained a certain age in a certain rank were retired, with their pay equal to 2.5 percent multiplied by the number of years served, the maximum not to exceed 75 percent of their final active-duty pay. For example, a commander who reached age 50 and who had not been selected for promotion to captain would be placed on the retired list; if he had served 25 years, he would receive 62.5 percent of his base pay upon retirement. Finally, the act also imposed on naval personnel the same mandatory retirement provision as the 1882 act (amended in 1890) imposed on army personnel, with age 64 established as the universal retirement age in the armed forces of the United States.

These plans applied to naval officers only; however, in 1867 Congress authorized the retirement of seamen and marines who had served 20 or more years and who had become infirm as a result of old age. These veterans would receive one-half their base pay for life. In addition, the act allowed any seaman or marine who had served 10 or more years and subsequently become disabled to apply to the Secretary of the Navy for a “suitable amount of relief” up to one-half base pay from the navy’s pension fund (see below). In 1899, the retirement act of 1885, which covered enlisted army personnel, was extended to enlisted navy personnel, with a few minor differences, which were eliminated in 1907. From that year, all enlisted personnel in both services were entitled to retire voluntarily at 75 percent of their pay and other allowances after 30 years of service, subsequently reduced to 20 years.

Funding U.S. Military Pensions

The history of pensions, particularly public sector pensions, cannot be easily separated from the history of pension finance. The creation of a pension plan coincides with the simultaneous creation of pension liabilities, and the parameters of the plan establish the size and the timing of those liabilities. U.S. Army pensions have always been funded on a “pay-as-you-go” basis from the general revenues of the U.S. Treasury. Thus army pensions have always been simply one more liability of the federal government. Despite the occasional accounting gimmick, the general revenues and obligations of the federal government are highly fungible, and so discussing the actuarial properties of the U.S. Army pension plan is like discussing the actuarial properties of the Department of Agriculture or the salaries of F.B.I. agents. However, until well into the twentieth century, this was not the case with navy pensions. They were long paid from a specific fund established separately from the general accounts of the treasury, and thus, their history is quite different from that of the army’s pensions.

From its inception in 1775, the navy’s pension plan for officers and seamen was financed with monies from the sale of captured prizes — enemy ships and those of other states carrying contraband. This funding mechanism meant that the flow of revenues needed to finance the navy’s pension liabilities was very erratic over time, fluctuating with the fortunes of war and peace. To manage these monies, the Continental Congress (and later the U.S. Congress) established the navy pension fund and allowed the trustees of this fund to invest the monies in a wide range of assets, including private equities. The history of the management of this pension fund illustrates many of the problems that can arise when public pension monies are used to purchase private assets. These include the loss of a substantial proportion of its assets on bad investments in private equities, the treasury’s bailout of the fund for these losses, and investment decisions that were influenced by political pressure. In addition, there is evidence of gross malfeasance on the part of the agents of the fund, including trading on their own accounts, insider trading, and outright fraud.

Excluding a brief interlude just prior to the Civil War, the navy pension fund had a colorful history lasting nearly one hundred and fifty years. Between its establishment in 1775 and 1842, it went bankrupt no fewer than three times, being bailed out by Congress each time. By 1842, there was little opportunity to continue to replenish the fund with fresh prize monies, and Congress, temporarily as it turned out, converted the navy pensions to a pay-as-you-go system, like army pensions. With the onset of the Civil War, the Union Navy’s blockade of Confederate ports created new prize opportunities; the fund was reestablished, and navy pensions were once again paid from prize monies. The fund subsequently accumulated an enormous balance. Like the antebellum losses of the fund, its postbellum surplus became something of a political football, and after much acrimonious debate, Congress took much of the fund’s balance and turned it over to the treasury. Still, the remnants of the fund persisted into the 1930s (Clark, Craig, and Wilson 2003).

Federal Civil Service Pensions

Like military pensions, pensions for loyal civil servants date back centuries; however, pension plans are of a more recent vintage, generally dating from the nineteenth century in Europe. In the United States, the federal government did not adopt a universal pension plan for civilian employees until 1920. This is not to say that there were no federal pensions before 1920. Pensions were available for some retiring civil servants, but Congress created them on a case-by-case basis. In the year before the federal pension plan went into effect, for example, there were 1,467 special acts of Congress either granting a new pension (912) or increasing the payments on old pensions (555) (Clark, Craig, and Wilson 2003). This process was as inefficient as it was capricious. Ending this system became a key objective of Congressional reforms.

The movement to create public sector pension plans at the turn of the twentieth century reflected the broader growth of the welfare state, particularly in Europe. Many progressives envisioned the nascent European “cradle-to-grave” programs as the precursor of a better society, one with a new social covenant between the state and its people. Old-age pensions would fill the last step before the grave. Although the ultimate goal of this movement, universal old-age pensions, would not be realized until the creation of the social security system during the Great Depression, the initial objective was to have the government supply old-age security to its own workers. To support the movement in the United States, proponents of universal old-age pensions pointed out that by the early twentieth century, thirty-two countries around the world, including most of the European states and many regimes considered to be reactionary on social issues, had some type of old-age pension for their non-military public employees. If the Russians could humanely treat their superannuated civil servants, the argument went, why couldn’t the United States?

Establishing the Civil Service System

In the United States, the key to the creation of a civil service pension plan was the creation of a civil service. Prior to the late nineteenth century, the vast majority of federal employees were patronage employees — that is, they served at the pleasure of an elected or appointed official. With the tremendous growth of the number of such employees in the nineteenth century, the costs of the patronage system eventually outweighed the benefits derived from it. Over the century as a whole, the number of post offices grew from 906 to 44,848; federal revenues grew from $3 million to over $400 million; and non-military employment went from 1,000 to 100,000. Indeed, the federal labor force nearly doubled in the 1870s alone (Johnson and Libecap 1994). The growth rates of these indicators of the size of the public sector are large even when compared to the dramatic fourteen-fold increase in U.S. population between 1800 and 1900. As a result, in 1883 Congress passed the Pendleton Act, which created the federal civil service; the act passed largely, though not entirely, along party lines. As the party in power, the Republicans saw the conversion of federal employment from patronage to “merit” as an opportunity to gain the lifetime loyalty of an entire cohort of federal workers. In other words, by converting patronage jobs to civil service jobs, the party in power attempted to create lifetime tenure for its patronage workers. Once in their civil service jobs, protected from the harshest effects of the market and the spoils system, federal workers simply did not want to retire — or, put another way, many tended to retire on the job — and thus the conversion from patronage to civil service led to an abundance of superannuated federal workers. Thus began the quest for a federal pension plan.

Passage of the Federal Employees Retirement Act

A bill providing pensions for non-military employees of the federal government was introduced in every session of Congress between 1900 and 1920. Representatives of workers’ groups, the executive branch, and the United States Civil Service Commission, as well as inquiries conducted by congressional committees, all requested or recommended the adoption of retirement plans for civil-service employees. While the political dynamics among these parties were often subtle and complex, the campaign culminated in the passage of the Federal Employees Retirement Act on May 22, 1920 (Craig 1995). The key features of the original act of 1920 included:

  • All classified civil service employees qualified for a pension after reaching age 70 and rendering at least 15 years of service. Mechanics, letter carriers, and post office clerks were eligible for a pension after reaching age 65, and railway clerks qualified at age 62.
  • The ages at which employees qualified were also mandatory retirement ages. An employee could, however, be retained for two years beyond the mandatory age if his department head and the head of the Civil Service Commission approved.
  • All eligible employees were required to contribute two and one-half percent of their salaries or wages towards the payment of pensions.
  • The pension benefit was determined by the number of years of service. Class A employees, those who had served 30 or more years, received 60 percent of their average annual salary during the last ten years of service. The benefits were scaled down through Class F employees (at least 15 but less than 18 years of service), who received 30 percent of their average annual salary over the same period.
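The graded schedule above can be sketched as a simple lookup table. Note the hedge: only the Class A (60 percent) and Class F (30 percent) figures come from the act as described in the text; the intermediate percentages are evenly interpolated purely for illustration and may not match the statutory schedule, and the function name is hypothetical:

```python
# Endpoints (Class A = 60%, Class F = 30%) are from the 1920 act as
# described above; Classes B-E are interpolated here for illustration
# only and may differ from the actual statute.
CLASS_PERCENT = {"A": 0.60, "B": 0.54, "C": 0.48, "D": 0.42, "E": 0.36, "F": 0.30}

def civil_service_pension(service_class, avg_salary_last_ten):
    """Annual benefit as a fraction of average salary over the final decade."""
    return CLASS_PERCENT[service_class] * avg_salary_last_ten
```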

Although subsequently revised, this plan remains one of the two main civil service pension plans in the United States, and it served as something of a model for many later public sector plans. The other, newer federal plan, established in 1983, is a hybrid: it has a traditional defined benefit component, a defined contribution component, and a Social Security component (Hustead and Hustead 2001).

State and Local Pensions

Decades before the states or the federal government provided civilian workers with a pension plan, several large American cities established plans for at least some of their employees. Until the first decades of the twentieth century, however, these plans were generally limited to three groups of employees: police officers, firefighters, and teachers. New York City established the first such plan for its police officers in 1857. Like the early military plans, the New York City police pension plan was a disability plan until a retirement feature was added in 1878 (Mitchell et al. 2001). Only a few other (primarily large) cities joined New York with a plan before 1900. In contrast, municipal workers in Austria-Hungary, Belgium, France, Germany, the Netherlands, Spain, Sweden, and the United Kingdom were covered by retirement plans by 1910 (Squier 1912).

Despite the relatively late start, the subsequent growth of such plans in the United States was rapid. By 1916, 159 cities had a plan for one or more of these groups of workers, and 21 of those cities included other municipal employees in some type of pension coverage (Monthly Labor Review, 1916). In 1917, 85 percent of cities with 100,000 or more residents paid some form of police pension, as did 66 percent of those with populations between 50,000 and 100,000 and 50 percent of cities with populations between 30,000 and 50,000 (James 1921). These figures do not mean that all of these cities had a formal retirement plan; they only indicate that a city had at least $1 of pension liability, which could have arisen from a disability pension, a forced savings plan, or a discretionary pension. Still, by 1928, the Monthly Labor Review (April, 1928) could characterize police and fire plans as “practically universal.” At that time, all cities with populations over 400,000 had a pension plan for police officers, firefighters, or both. Only one did not have a plan for police officers, and only one did not have a plan for firefighters. Several of those cities also had plans for their other municipal employees, and some cities maintained pension plans for their public school teachers separately from state teachers’ plans, which are reviewed below.

Eventually, some states also began to establish pension plans for state employees; however, initially these plans were primarily limited to teachers. Massachusetts established the first retirement pension plan for general state employees in 1911. The plan required workers to pay up to 5 percent of their salaries to a trust fund. Benefits were payable upon retirement. Workers were eligible to retire at age 60, and retirement was mandatory at age 70. At the time of retirement, the state purchased an annuity equal to twice the accumulated value (with interest) of the employee’s contribution. The calculation of the appropriate interest rate was, in many cases, not straightforward. Sometimes market rates or yields from a portfolio of assets were employed; sometimes a rate was simply established by legislation (see below). The Massachusetts plan initially became something of a model for subsequent public-sector pensions, but it was soon replaced by what became the standard public sector, defined benefit plan, much like the federal plan described above, in which the pension annuity was based on years of service and end-of-career earnings. Curiously, the Massachusetts plan resembled in some respects what have been referred to more recently as cash balance plans — hybrid plans that contain elements of both defined benefit and defined contribution plans.
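The Massachusetts arrangement can be sketched numerically. The sketch below assumes the 5 percent contribution ceiling stated above and the 3 percent crediting rate the retirement board initially used (both from the text; the function names and the flat yearly compounding are illustrative assumptions):

```python
def accumulated_contributions(annual_salaries, rate=0.05, interest=0.03):
    """Compound a worker's yearly contributions (here 5 percent of salary)
    at an assumed 3 percent credited interest rate."""
    balance = 0.0
    for salary in annual_salaries:
        balance = balance * (1 + interest) + rate * salary
    return balance

def annuity_purchase_value(annual_salaries):
    # Under the 1911 plan, the state purchased an annuity worth twice
    # the accumulated value of the employee's own contributions.
    return 2 * accumulated_contributions(annual_salaries)
```

For a single year at a $1,000 salary, the worker contributes $50, so the state purchases a $100 annuity; the state's match is what gives the plan its cash-balance flavor.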

Relative to the larger municipalities, the states were, in general, quite slow to adopt pension plans for their employees. As late as 1929, only six states had anything like a civil service pension plan for their (non-teacher) employees (Millis and Montgomery 1938). The record shows that pensions for state and local civil servants are, for the most part, twentieth-century developments. However, after individual municipalities began adopting plans for their teachers in the early twentieth century, the states moved fairly aggressively in the 1910s and 1920s to create or consolidate plans for their other teachers. By the late 1920s, 21 states had formal retirement plans for their public school teachers (Clark, Craig, and Wilson 2003). This summary of state and local pension plans suggests that of all the political units in the United States, the states themselves were the slowest to create pension plans for their civil service workers. That observation is slightly misleading, however. In 1930, 40 percent of all state and local employees were schoolteachers, and the 21 states that maintained a plan for their teachers included the most populous states at the time. While public sector pensions at the state and local level were far from universal by the 1920s, they did cover a substantial proportion of public sector workers, and that proportion was growing rapidly in the early decades of the twentieth century.

Funding State and Local Pensions

No discussion of the public sector pension plans would be complete without addressing the way in which the various plans were funded. The term “funded pension” is often used to mean a pension plan that had a specific source of revenues dedicated to pay for the plan’s liabilities. Historically, most public sector pension plans required some contribution from the employees covered by the plan, and in a sense, this contribution “funded” the plan; however, the term “funded” is more often taken to mean that the pension plan receives a stream of public funds from a specific source, such as a share of property tax revenues. In addition, the term “actuarially sound” is often used to describe a pension plan in which the present value of tangible assets roughly equaled the present value of expected liabilities. Whereas one would logically expect an actuarially sound plan to be a funded plan, indeed a “fully funded” plan, a funded plan need not be actuarially sound, because it is possible that the flow of funds was simply too small to sufficiently cover liabilities.

Many early state and local plans were not funded at all, and fewer still were actuarially sound. Of course, in another sense, public sector pension plans are implicitly funded to the extent that they are backed by the coercive powers of the state. Through their monopoly of taxation, financially solvent and militarily successful states will be able to rely on their tax bases to fund their pension liabilities. Although this is exactly how most of the early state and local plans were ultimately financed, this is not what is typically meant by the term “funded plan.” Still, an important part of the history of state and local pensions revolves around exactly what happened to the funds (mostly employee contributions) that were maintained on behalf of the public sector workers.

Although the maintenance and operation of the state and local pension funds varied greatly during this early period, most plans required a contribution from workers, and this contribution was to be deposited in a so-called “annuity fund.” The assets of the fund were to be “invested” in various ways. In some cases the funds were invested “in accordance with the laws of the state governing the investment of savings bank funds.” In others the investments of the fund were to be credited “regular interest,” defined as “the rate determined by the retirement board, and shall be substantially that which is actually earned by the fund of the retirement association.” This rate varied from state to state. In Connecticut, for example, it was literally a realized rate – i.e., a market rate. In Massachusetts, it was initially set at 3 percent by the retirement board, but it subsequently became a realized rate, which turned out to be roughly 4 percent in the late 1910s. In Pennsylvania, the rate was set at 4 percent by law. In addition, all three states created a “pension fund,” which contained the state’s contribution to the workers’ retirement annuity. In Connecticut and Massachusetts, this fund simply consisted of “such amounts as shall be appropriated by the general assembly from time to time.” In other words, the state’s share of the pension was on a “pay-as-you-go” basis. In Pennsylvania, however, the state actually contributed 2.8 percent of a teacher’s salary semi-annually to the state pension fund (Clark, Craig, and Wilson 2003).

By the late 1920s some states were basing their contributions to their teachers’ pension fund on actuarial calculations. The first states to adopt such plans were New Jersey, Ohio, and Vermont (Studenski 1920). What this meant in practice was that the state essentially estimated its expected future liability based on a worker’s experience, age, earnings, life expectancy, and so forth, and then deposited that amount into the pension fund. This was originally referred to as a “scientific” pension plan. These were truly funded and actuarially sound defined benefit plans.
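The “scientific” approach amounts to estimating a discounted, survival-weighted expected liability and depositing that amount. A minimal sketch under assumed inputs (the actuarial tables and discount rates actually used varied by state, and the function name is hypothetical):

```python
def present_value_of_liability(expected_payments, survival_probs, discount_rate):
    """Discounted value of a stream of future pension payments, where each
    year's payment counts only with the probability the pensioner survives
    to receive it -- the core of an actuarially sound funding estimate."""
    pv = 0.0
    for t, (payment, p_alive) in enumerate(zip(expected_payments, survival_probs),
                                           start=1):
        pv += p_alive * payment / (1 + discount_rate) ** t
    return pv
```

For example, a single $125 payment due in one year, certain to be collected and discounted at 25 percent, has a present value of $100; halving the survival probability halves that year's contribution to the liability.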

As noted, several of the early plans paid an annuity based on the performance of the pension fund. The return on the fund’s portfolio is important because it would ultimately determine the soundness of the funding scheme and, in some cases, the actual annuity the worker would receive. Even the funded, defined benefit plans based the worker’s and the employer’s contributions on expected earnings on the invested funds. How did these early state and local pension funds manage the assets they held? Several state plans restricted their funds to holding only those assets that could be held by state-chartered mutual savings banks. Typically, these banks could hold federal, state, or local government debt. In most states, they could usually hold debt issued by private corporations and occasionally private equities. In the first half of the twentieth century, there were 19 states that chartered mutual savings banks. They were overwhelmingly in the Northeast, Midwest, and Far West — the same regions in which state and local pension plans were most prevalent. However, in most cases the corporate securities were limited to those on a so-called “legal list,” which was supposed to contain only the safest corporate investments. Admission to the legal list was based on a compilation of corporate assets, earnings, dividends, prior default records, and so forth. The objective was to provide a list that consisted of the bluest of blue chip corporate securities. In the early decades of the twentieth century, these lists were dominated by railroad and public-utility issues (Hickman 1958). States, such as Massachusetts, that did not restrict investments to those held by mutual savings banks placed similar limits on state pension funds. Massachusetts limited investments to those that could be made in state-established “sinking funds.” Ohio explicitly limited its pension funds to U.S. debt, Ohio state debt, and the debt of any “county, village, city, or school district of the state of Ohio” (Studenski 1920).

Collectively, the objective of these restrictions was risk minimization — though the economics of that choice is not as simple as it might appear. Cities and states that invested in their own municipal bonds faced an inherent moral hazard. Specifically, public employees might be forced to contribute a proportion of their earnings to their pension funds. If the city then purchased debt at par from itself for the pension fund when that debt might for various reasons not circulate at par on the open market, then the city could be tempted to go to the pension fund rather than the market for funds. This process would tend to insulate the city from the discipline of the market, which would in turn tend to cause the city to over-invest in activities financed in this way. Thus, the pension funds, actually the workers themselves, would essentially be forced to subsidize other city operations. In practice, the main beneficiaries would have been the contractors whose activities were funded by the workers’ pension funds; at the time, these would have included largely sewer, water, and road projects. The Chicago police pension fund offers an example of the problem. An audit of the fund in 1912 reported: “It is to be regretted that there are no complete statistical records showing the operation of this fund in the city of Chicago.” As a recent history of pensions noted, “It is hard to imagine that the records were simply misplaced by accident” (Clark, Craig, and Wilson 2003, 213). Thus, like the U.S. Navy pension fund, the agents of these municipal and state funds faced a moral hazard that scholars are still analyzing more than a century later.


Clark, Robert L., Lee A. Craig, and Jack W. Wilson. A History of Public Sector Pensions. Philadelphia: University of Pennsylvania Press, 2003.

Craig, Lee A. “The Political Economy of Public-Private Compensation Differentials: The Case of Federal Pensions.” Journal of Economic History 55 (1995): 304-320.

Crook, J. A. “Augustus: Power, Authority, Achievement.” In The Cambridge Ancient History, edited by Alan K. Bowman, Edward Champlin, and Andrew Lintott. Cambridge: Cambridge University Press, 1996.

Employee Benefit Research Institute. EBRI Databook on Employee Benefits. Washington, D. C.: EBRI, 1997.

Ferguson, E. James. Power of the Purse: A History of American Public Finance. Chapel Hill, NC: University of North Carolina Press, 1961.

Hustead, Edwin C., and Toni Hustead. “Federal Civilian and Military Retirement Systems.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead, 66-104. Philadelphia: University of Pennsylvania Press, 2001.

James, Herman G. Local Government in the United States. New York: D. Appleton & Company, 1921.

Johnson, Ronald N., and Gary D. Libecap. The Federal Civil Service System and the Problem of Bureaucracy. Chicago: University of Chicago Press, 1994.

Middlekauff, Robert. The Glorious Cause: The American Revolution, 1763-1789. New York: Oxford University Press, 1982.

Millis, Harry A., and Royal E. Montgomery. Labor’s Risk and Social Insurance. New York: McGraw-Hill, 1938.

Mitchell, Olivia S., David McCarthy, Stanley C. Wisniewski, and Paul Zorn. “Developments in State and Local Pension Plans.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead. Philadelphia: University of Pennsylvania Press, 2001.

Monthly Labor Review, various issues.

Squier, Lee Welling. Old Age Dependency in the United States. New York: Macmillan, 1912.

Studenski, Paul. Teachers’ Pension Systems in the United States: A Critical and Descriptive Study. New York: D. Appleton and Company, 1920.

Citation: Craig, Lee. “Public Sector Pensions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2003.

Life Insurance in the United States through World War I

Sharon Ann Murphy

The first American life insurance enterprises can be traced back to the late colonial period. The Presbyterian Synods in Philadelphia and New York set up the Corporation for Relief of Poor and Distressed Widows and Children of Presbyterian Ministers in 1759; the Episcopalian ministers organized a similar fund in 1769. In the half century from 1787 to 1837, twenty-six companies offering life insurance to the general public opened their doors, but they rarely survived more than a couple of years and sold few policies [Figures 1 and 2]. The only early companies to experience much success in this line of business were the Pennsylvania Company for Insurances on Lives and Granting Annuities (chartered 1812), the Massachusetts Hospital Life Insurance Company (1818), the Baltimore Life Insurance Company (1830), the New York Life Insurance and Trust Company (1830), and the Girard Life Insurance, Annuity and Trust Company of Pennsylvania (1836). [See Table 1.]

Despite this tentative start, the life insurance industry did make some significant strides beginning in the 1830s [Figure 2]. Life insurance in force (the total death benefit payable on all existing policies) grew steadily from about $600,000 in 1830 to just under $5 million a decade later, with New York Life and Trust policies accounting for more than half of this latter amount. Over the next five years insurance in force almost tripled to $14.5 million before surging by 1850 to just under $100 million of life insurance spread among 48 companies. The top three companies – the Mutual Life Insurance Company of New York (1842), the Mutual Benefit Life Insurance Company of New Jersey (1845), and the Connecticut Mutual Life Insurance Company (1846) – accounted for more than half of this amount. The sudden success of life insurance during the 1840s can be attributed to two main developments – changes in legislation impacting life insurance and a shift in the corporate structure of companies towards mutualization.

Married Women’s Acts

Life insurance companies targeted women and children as the main beneficiaries of insurance, despite the fact that the majority of women were prevented by law from gaining the protection offered in the unfortunate event of their husband’s death. The first problem was that companies strictly adhered to the common law idea of insurable interest, which required that any person taking out insurance on the life of another have a specific monetary interest in that person’s continued life; “affection” (i.e., the relationship of husband and wife or parent and child) was not considered adequate evidence of insurable interest. Additionally, married women could not enter into contracts on their own and therefore could not take out life insurance policies either on themselves (for the benefit of their children or husband) or directly on their husbands (for their own benefit). One way around this problem was for the husband to take out the policy on his own life and assign his wife or children as the beneficiaries. This arrangement proved to be flawed, however, since the policy was considered part of the husband’s estate and therefore could be claimed by any creditors of the insured.

New York’s 1840 Law

This dilemma did not pass unnoticed by promoters of life insurance who viewed it as one of the main stumbling blocks to the growth of the industry. The New York Life and Trust stood at the forefront of a campaign to pass a state law enabling women to procure life insurance policies protected from the claims of creditors. The law, which passed the New York state legislature on April 1, 1840, accomplished four important tasks. First, it established the right of a woman to enter into a contract of insurance on the life of her husband “by herself and in her name, or in the name of any third person, with his assent, as her trustee.” Second, it provided that the insurance would be “free from the claims of the representatives of her husband, or of any of his creditors” unless the annual premiums on the policy exceeded $300 (approximately the premium required to take out the maximum $10,000 policy on the life of a 40-year-old). Third, in the event of the wife predeceasing the husband, the policy reverted to the children, who were granted the same protection from creditors. Finally, as the law was interpreted by both companies and the courts, wives were not required to prove their monetary interest in the life of the insured, establishing for the first time an instance of insurable interest independent of pecuniary interest in the life of another.

By December of 1840, Maryland had enacted an identical law – copied word for word from the New York statute. The Massachusetts legislation of 1844 went one step further by protecting from the claims of creditors all policies procured “for the benefit of a married woman, whether effected by her, her husband, or any other person.” The 1851 New Jersey law was the most stringent, limiting annual premiums to only $100. In those states where a general law did not exist, new companies often had the New York law inserted into their charter, with these provisions being upheld by the state courts. For example, the Connecticut Mutual Life Insurance Company (1846), the North Carolina Mutual Life Insurance Company (1849), and the Jefferson Life Insurance Company of Cincinnati, Ohio (1850) all provided this protection in their charters despite the silence of their respective states on the issue.

Mutualization

The second important development of the 1840s was the emergence of mutual life insurance companies in which any annual profits were redistributed to the policyholders rather than to stockholders. Although mutual insurance was not a new concept – the Society for Equitable Assurances on Lives and Survivorships of London had been operating under the mutual plan since its establishment in 1762 and American marine and fire companies were commonly organized as mutuals – the first American mutual life companies did not begin issuing policies until the early 1840s. The main impetus for this shift to mutualization was the panic of 1837 and the resulting financial crisis, which combined to dampen the enthusiasm of investors for projects ranging from canals and railroads to banks and insurance companies. Between 1838 and 1846, only one life insurance company was able to raise the capital essential for organization on a stock basis. On the other hand, mutuals required little initial capital, relying instead on the premium payments from high-volume sales to pay any death claims. The New England Mutual Life Insurance Company (1835) issued its first policy in 1844 and the Mutual Life Insurance Company of New York (1842) began operation in 1843; at least fifteen more mutuals were chartered by 1849.

Aggressive Marketing

In order to achieve the necessary sales volume, mutual companies began to aggressively promote life insurance through advertisements, editorials, pamphlets, and soliciting agents. These marketing tactics broke with the traditionally staid practices of banks and insurance companies whereby advertisements generally had provided only the location of the local office and agents passively had accepted applications from customers who inquired directly at their office.

Advantages of Mutuality

The mutual marketing campaigns not only advanced life insurance in general but mutuality in particular, which held widespread appeal for the public at large. Policyholders who could not afford to own stock in a proprietary insurance company could now share in the financial success of the mutual companies, with any annual profits (the excess of invested premium income over death payments) being redistributed to the policyholders, often in the form of reduced premium payments. The rapid success of life insurance during the late 1840s, as seen in Figure 3, thus can be attributed both to this active marketing as well as to the appeal of mutual insurance itself.

Regulation and Stagnation after 1849

While many of these companies operated on a sound financial basis, the ease of formation opened the field to several fraudulent or fiscally unsound companies. Stock institutions, concerned both for the reputation of life insurance in general and for their own self-preservation, lobbied the New York state legislature for a law to limit the operation of mutual companies. On April 10, 1849 the legislature passed a law requiring all new insurance companies either incorporating or planning to do business in New York to possess $100,000 of capital stock. Two years later, the legislature passed a more stringent law obligating all life insurance companies to deposit $100,000 with the Comptroller of New York. While this capital requirement was readily met by most stock companies and by the more established New York-based mutual companies, it effectively dampened the movement toward mutualization until the 1890s. Additionally, twelve out-of-state companies ceased doing business in New York altogether, leaving only the New England Mutual and the Mutual Benefit of New Jersey to compete with the New York companies in one of the largest markets. These laws were also largely responsible for the decade-long stagnation in insurance sales beginning in 1849 [Figure 3].

The Civil War and Its Aftermath

By the end of the 1850s life insurance sales again began to increase, climbing to almost $200 million by 1862 before tripling to just under $600 million by the end of the Civil War; life insurance in force peaked at $2 billion in 1871 [Figures 3 and 4]. Several factors contributed to this renewed success. First, the establishment of insurance departments in Massachusetts (1856) and New York (1859) to oversee the operation of fire, marine, and life insurance companies stimulated public confidence in the financial soundness of the industry. Additionally, in 1861 the Massachusetts legislature passed a non-forfeiture law, which forbade companies from terminating policies for lack of premium payment. Instead, the law stipulated that policies be converted to term life policies and that companies pay any death claims that occurred during this term period [term policies are issued only for a stipulated number of years, require reapplication on a regular basis, and consequently command significantly lower annual premiums which rise rapidly with age]. This law was further strengthened in 1880 when Massachusetts mandated that policyholders have the additional option of receiving a cash surrender value for a forfeited policy.

The Civil War was another factor in this resurgence. Although the industry had no experience with mortality during war – particularly a war on American soil – and most policies contained clauses that voided them in the case of military service, several major companies decided to insure war risks for an additional premium of 2% to 5%. While most companies just about broke even on these soldiers’ policies, the goodwill and publicity engendered with the payment of each death claim combined with a generally heightened awareness of mortality to greatly increase interest in life insurance. In the immediate postbellum period, investment in most industries increased dramatically and life insurance was no exception. Whereas only 43 companies existed on the eve of the war, the newfound popularity of life insurance resulted in the establishment of 107 companies between 1865 and 1870 [Figure 1].


The other major innovation in life insurance occurred in 1867 when the Equitable Life Assurance Society (1859) began issuing tontine or deferred dividend policies. While a portion of each premium payment went directly towards an ordinary insurance policy, another portion was deposited in an investment fund with a set maturity date (usually 10, 15, or 20 years) and a restricted group of participants. The beneficiaries of deceased policyholders received only the face value of the standard life component while participants who allowed their policy to lapse either received nothing or only a small cash surrender value. At the end of the stipulated period, the dividends that had accumulated in the fund were divided among the remaining participants. Agents often promoted these policies with inflated estimates of future returns – and always assured the potential investor that he would be a beneficiary of the high lapse rate and not one of the lapsing participants. Estimates indicate that approximately two-thirds of all life insurance policies in force in 1905 – at the height of the industry’s power – were deferred dividend plans.
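The division of the deferred dividend fund described above can be sketched numerically. The figures below are hypothetical (they do not come from the article); the point is only to show why persisting policyholders benefited from the high lapse rate:

```python
# Illustrative sketch of a tontine / deferred dividend payout.
# All numbers are hypothetical, chosen only to show the mechanism.

def tontine_share(fund_total, n_start, n_died, n_lapsed):
    """Divide the accumulated dividend fund among remaining participants.

    Heirs of deceased policyholders received only the ordinary death
    benefit, and lapsing policyholders forfeited their stake, so the
    fund is split among those who both survived and kept paying.
    """
    n_remaining = n_start - n_died - n_lapsed
    return fund_total / n_remaining

# 1,000 participants; over the deferral period 150 die and 400 lapse,
# leaving 450 to split a $500,000 fund.
share = tontine_share(fund_total=500_000, n_start=1000, n_died=150, n_lapsed=400)
print(round(share, 2))  # → 1111.11
```

Each lapse enlarges every remaining participant’s share, which is why agents could (truthfully, if misleadingly) pitch the high lapse rate as a benefit to the buyer.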

Reorganization and Innovation

The success and profitability of life insurance companies bred stiff competition during the 1860s; the resulting market saturation and a general economic downturn combined to push the industry into a severe depression during the 1870s. While the more well-established companies such as the Mutual Life Insurance Company of New York, the New York Life Insurance Company (1843), and the Equitable Life Assurance Society were strong enough to weather the depression with few problems, most of the new corporations organized during the 1860s were unable to survive the downturn. All told, 98 life insurance companies went out of business between 1868 and 1877, with 46 ceasing operations during the depression years of 1871 to 1874 [Figure 1]. Of these, 32 failed outright, resulting in $35 million of losses for policyholders. It was 1888 before the amount of insurance in force surpassed that of its peak in 1870 [Figure 4].

Assessment and Fraternal Insurance Companies

Taking advantage of these problems within the industry were numerous assessment and fraternal benefit societies. Assessment or cooperative companies, as they were sometimes called, were associations in which each member was assessed a flat fee to provide the death benefit when another member died rather than paying an annual premium. The two main problems with these organizations were the uncertain number of assessments each year and the difficulty of maintaining membership levels. As members aged and death rates rose, the assessment societies found it difficult to recruit younger members willing to take on the increasing risks of assessments. By the turn of the century, most assessment companies had collapsed or reorganized as mutual companies.

Fraternal organizations were voluntary associations of people affiliated through ethnicity, religion, profession, or some other tie. Although fraternal societies had existed throughout the history of the United States, it was only in the postbellum era that they mushroomed in number and emerged as a major provider of life insurance, mainly for working-class Americans. While many fraternal societies initially issued insurance on an assessment basis, most soon switched to mutual insurance. By the turn of the century, the approximately 600 fraternal societies in existence provided over $5 billion in life insurance to their members, making them direct competitors of the major stock and mutual companies. Just 5 years later, membership was over 6 million with $8 billion of insurance in force [Figure 4].

Industrial Life Insurance

For the few successful life insurance companies organized during the 1860s and 1870s, innovation was the only means of avoiding failure. Aware that they could not compete with the major companies in a tight market, these emerging companies concentrated on markets previously ignored by the larger life insurance organizations – looking instead to the example of the fraternal benefit societies. Beginning in the mid-1870s, companies such as the John Hancock Company (1862), the Metropolitan Life Insurance Company (1868), and the Prudential Insurance Company of America (1875) started issuing industrial life insurance. Industrial insurance, which began in England in the late 1840s, targeted lower income families by providing policies in amounts as small as $100, as opposed to the thousands of dollars normally required for ordinary insurance. Premiums ranging from $0.05 to $0.65 were collected on a weekly basis, often by agents coming door-to-door, instead of on an annual, semi-annual, or quarterly basis by direct remittance to the company. Additionally, medical examinations were often not required and policies could be written to cover all members of the family instead of just the main breadwinner. While the number of policies written skyrocketed to over 51 million by 1919, industrial insurance remained only a fraction of the amount of life insurance in force throughout the period [Figures 4 and 5].

International Expansion

The major life insurance companies also quickly expanded into the global market. While numerous firms ventured abroad as early as the 1860s and 1870s, the most rapid international growth occurred between 1885 and 1905. By 1900, the Equitable was providing insurance in almost 100 nations and territories, the New York Life in almost 50 and the Mutual in about 20. The international premium income (excluding Canada) of these Big Three life insurance companies amounted to almost $50 million in 1905, covering over $1 billion of insurance in force.

The Armstrong Committee Investigation

In response to a multitude of newspaper articles portraying extravagant spending and political payoffs by executives at the Equitable Life Assurance Society – all at the expense of their policyholders – Superintendent Francis Hendricks of the New York Insurance Department reluctantly conducted an investigation of the company in 1905. His report substantiated these allegations and prompted the New York legislature to create a special committee, known as the Armstrong Committee, to examine the conduct of all life insurance companies operating within the state. Appointed chief counsel of the investigation was future United States Supreme Court Chief Justice Charles Evans Hughes. Among the abuses uncovered by the committee were interlocking directorates, the creation of subsidiary financial institutions to evade restrictions on investments, the use of proxy voting to frustrate policyholder control of mutuals, unlimited company expenses, tremendous spending for lobbying activities, rebating (the practice of returning to a new client a portion of their first premium payment as an incentive to take out a policy), the encouragement of policy lapses, and the condoning of “twisting” (a practice whereby agents misrepresented and libeled rival firms in order to convince a policyholder to sacrifice their existing policy and replace it with one from that agent). Additionally, the committee severely chastised the New York Insurance Department for permitting such malpractice to occur and recommended the enactment of a wide array of reform measures. These revelations induced numerous other states to conduct their own investigations, including New Jersey, Massachusetts, Ohio, Missouri, Wisconsin, Tennessee, Kentucky, Minnesota, and Nebraska.

New Regulations

In 1907, the New York legislature responded to the committee’s report by issuing a series of strict regulations specifying acceptable investments, limiting lobbying practices and campaign contributions, democratizing management through the elimination of proxy voting, standardizing policy forms, and limiting agent activities including rebating and twisting. Most devastating to the industry, however, were the prohibition of deferred dividend policies and the requirement of regular dividend payments to policyholders. Nineteen other states followed New York’s lead in adopting similar legislation but the dominance of New York in the insurance industry enabled it to assert considerable influence over a large percentage of the industry. The state invoked the Appleton Rule, a 1901 administrative rule devised by New York Deputy Superintendent of Insurance Henry D. Appleton that required life insurance companies to comply with New York legislation both in New York and in all other states in which they conducted business, as a condition of doing business in New York. As the Massachusetts insurance commissioner immediately recognized, “In a certain sense [New York’s] supervision will be a national supervision, as its companies do business in all the states.” The rule was officially incorporated into New York’s insurance laws in 1939 and remained both in effect and highly effective until the 1970s.

Continued Growth in the Early Twentieth Century

The Armstrong hearings and the ensuing legislation renewed public confidence in the safety of life insurance, resulting in a surge of new company organizations not seen since the 1860s. Whereas only 106 companies existed in 1904, another 288 were established in the ten years from 1905 to 1914 [Figure 1]. Life insurance in force likewise rose rapidly, increasing from $20 billion on the eve of the hearings to almost $46 billion by the end of World War I, with the share insured by the fraternal and assessment societies decreasing from 40% to less than a quarter [Figure 5].

Group Insurance

One major innovation to occur during these decades was the development of group insurance. In 1911 the Equitable Life Assurance Society wrote a policy covering the 125 employees of the Pantasote Leather Company, requiring neither individual applications nor medical examinations. The following year, the Equitable organized a group department to promote this new product and soon was insuring the employees of Montgomery Ward Company. By 1919, 29 companies wrote group policies, which amounted to over half a billion dollars’ worth of life insurance in force.

War Risk Insurance

Not included in Figure 5 is the War Risk insurance issued by the United States government during World War I. Beginning in April 1917, all active military personnel received a $4,500 insurance policy payable by the federal government in the case of death or disability. In October of the same year, the government began selling low-cost term life and disability insurance, without medical examination, to all active members of the military. War Risk insurance proved to be extremely popular during the war, reaching over $40 billion of life insurance in force by 1919. In the aftermath of the war, these term policies quickly declined to under $3 billion of life insurance in force, with many servicemen turning instead to the whole life policies offered by the stock and mutual companies. As was the case after the Civil War, life insurance sales rose dramatically after World War I, peaking at $117 billion of insurance in force in 1930. By the eve of the Great Depression there existed over 120 million life insurance policies – approximately equivalent to one policy for every man, woman, and child living in the United States at that time.

(Sharon Ann Murphy is a Ph.D. Candidate at the Corcoran Department of History, University of Virginia.)

References and Further Reading

Buley, R. Carlyle. The American Life Convention, 1906-1952: A Study in the History of Life Insurance. New York: Appleton-Century-Crofts, Inc., 1953.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames, Iowa: Iowa State University Press, 1988.

Keller, Morton. The Life Insurance Enterprise, 1885-1910: A Study in the Limits of Corporate Power. Cambridge, MA: Belknap Press, 1963.

Kimball, Spencer L. Insurance and Public Policy: A Study in the Legal Implications of Social and Economic Public Policy, Based on Wisconsin Records 1835-1959. Madison, WI: University of Wisconsin Press, 1960.

Merkel, Philip L. “Going National: The Life Insurance Industry’s Campaign for Federal Regulation after the Civil War.” Business History Review 65 (Autumn 1991): 528-553.

North, Douglass. “Capital Accumulation in Life Insurance between the Civil War and the Investigation of 1905.” In Men in Business: Essays on the Historical Role of the Entrepreneur, edited by William Miller, 238-253. New York: Harper & Row Publishers, 1952.

Ransom, Roger L., and Richard Sutch. “Tontine Insurance and the Armstrong Investigation: A Case of Stifled Innovation, 1868-1905.” Journal of Economic History 47, no. 2 (June 1987): 379-390.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge, MA: Harvard University Press, 1942.

Table 1

Early American Life Insurance Companies, 1759-1844

Company | Year Chartered | Year Terminated | Insurance in Force in 1840 ($)
Corp. for the Relief of Poor and Distressed Widows and Children of Presbyterian Ministers (Presbyterian Ministers Fund) | 1759 | |
Corporation for the Relief of the Widows and Children of Clergymen in the Communion of the Church of England in America (Episcopal Ministers Fund) | 1769 | |
Insurance Company of the State of Pennsylvania | 1794 | 1798 |
Insurance Company of North America, PA | 1794 | 1798 |
United Insurance Company, NY | 1798 | 1802 |
New York Insurance Company | 1798 | 1802 |
Pennsylvania Company for Insurances on Lives and Granting Annuities | 1812 | 1872* | 691,000
New York Mechanics Life & Fire | 1812 | 1813 |
Dutchess County Fire, Marine & Life, NY | 1814 | 1818 |
Massachusetts Hospital Life Insurance Company | 1818 | 1867* | 342,000
Union Insurance Company, NY | 1818 | 1840 |
Aetna Insurance Company (mainly fire insurance; separate life company chartered in 1853) | 1820 | 1853 |
Farmers Loan & Trust Company, NY | 1822 | 1843 |
Baltimore Life Insurance Company | 1830 | 1867 | 750,000 (est.)
New York Life Insurance & Trust Company | 1830 | 1865* | 2,880,000
Lawrenceburg Insurance Company | 1832 | 1836 |
Mississippi Insurance Company | 1833 | 1837 |
Protection Insurance Company, Mississippi | 1833 | 1837 |
Ohio Life Ins. & Trust Co. (life policies appear to have been reinsured with New York Life & Trust in the late 1840s) | 1834 | 1857 | 54,000
New England Mutual Life Insurance Company, Massachusetts (did not begin issuing policies until 1844) | 1835 | | 0
Ocean Mutual, Louisiana | 1835 | 1839 |
Southern Life & Trust, Alabama | 1836 | 1840 |
American Life Insurance & Trust Company, Baltimore | 1836 | 1840 |
Girard Life Insurance, Annuity & Trust Company, Pennsylvania | 1836 | 1894 | 723,000
Missouri Life & Trust | 1837 | 1841 |
Missouri Mutual | 1837 | 1841 |
Globe Life Insurance, Trust & Annuity Company, Pennsylvania | 1837 | 1857 |
Odd Fellow Life Insurance and Trust Company, Pennsylvania | 1840 | 1857 |
National of Pennsylvania | 1841 | 1852 |
Mutual Life Insurance Company of New York | 1842 | |
New York Life Insurance Company | 1843 | |
State Mutual Life Assurance Company, Massachusetts | 1844 | |

*Date company ceased writing life insurance.

Citation: Murphy, Sharon. “Life Insurance in the United States through World War I”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2002.

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004, the contract sold for $40,120 = $40.12 × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw on the clearinghouse $50, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
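The daily settlement in the soybean example above can be sketched in a few lines of Python. The contract size and the $9.70 → $9.71 move are the article’s; the function name and any longer price path are illustrative only:

```python
# Minimal sketch of daily marking-to-market for one futures contract,
# using the article's 5,000-bushel soybean example.

CONTRACT_BUSHELS = 5_000

def variation_margins(settlement_prices):
    """Daily credits (+) to the long's margin account, one per day.

    The short is debited the same amounts, so the clearinghouse
    breaks even on every trade while members' accounts move daily.
    """
    return [
        (today - yesterday) * CONTRACT_BUSHELS
        for yesterday, today in zip(settlement_prices, settlement_prices[1:])
    ]

# Contract traded at $9.70; settles at $9.71 that evening.
flows = variation_margins([9.70, 9.71])
print([round(f, 2) for f in flows])  # → [50.0]: long credited $50, short debited $50
```

Summing the daily flows over the life of the contract recovers the long’s total gain or loss, which is why, if a trader defaults, the clearinghouse is exposed to at most one day’s move.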

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Consequently, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample, its price is unfettered, and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.35 − $2.40) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 − $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
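The arithmetic of this corn hedge can be checked with a short sketch. The prices and quantities are the article’s; the function and variable names are illustrative:

```python
# Sketch of the article's textbook corn hedge: Hedger A is long 5,000
# bushels of cash corn bought at $2.40 and short one July futures
# contract sold at $2.50.

BUSHELS = 5_000

def hedge_pnl(spot_buy, spot_sell, fut_sell, fut_buy):
    """Return (spot P&L, futures P&L, net) for the hedged position."""
    spot = (spot_sell - spot_buy) * BUSHELS    # cash-market gain/loss
    futures = (fut_sell - fut_buy) * BUSHELS   # short futures gain/loss
    return spot, futures, spot + futures

# Absent basis risk, a 5-cent fall in the spot price ($2.40 -> $2.35)
# is matched by a 5-cent fall in the July futures price ($2.50 -> $2.45).
spot, fut, net = hedge_pnl(spot_buy=2.40, spot_sell=2.35,
                           fut_sell=2.50, fut_buy=2.45)
print(round(spot), round(fut), round(net))  # → -250 250 0
```

With basis risk, the futures leg would fall by more or less than five cents, leaving a small nonzero net: the basis risk the hedger accepts in exchange for shedding price risk.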

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, namely the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). However, as late as 1840 Ohio was the only state or region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.
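
The cash flows of such a consignment can be illustrated with a short, stylized sketch. The three-quarters advance follows the text, while the 5% finance-and-commission charge is a hypothetical placeholder; amounts are in cents so the arithmetic stays exact:

```python
def consignment(market_value, advance_pct=75, fee_pct=5):
    """Advance paid up front, and the final payment due the dealer on sale.

    market_value is in cents; advance_pct follows the text's "roughly three
    quarters"; fee_pct is an assumed finance/commission charge.
    """
    advance = market_value * advance_pct // 100
    fees = market_value * fee_pct // 100
    final_payment = market_value - advance - fees
    return advance, final_payment

adv, final = consignment(1_000_000)  # a $10,000 lot of grain
print(adv, final)  # 750000 cents advanced now; 200000 cents when sold in the East
```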

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

In 1859, however, the CBT became a private association chartered by the State of Illinois. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – a market for extant, as opposed to newly issued, securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough: two brokers would settle their offsetting positions in cash between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1.

But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position by transferring her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement.

In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to literally search the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
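
The bookkeeping behind a ring settlement is easy to verify in code. The minimal sketch below (an illustration, not any historical procedure) nets the example trades among B1, B2 and B3 and confirms that every broker’s position is flat, so a three-way meeting could clear all of the contracts:

```python
from collections import defaultdict

def net_positions(trades):
    """Net long (+) or short (-) bushels per broker from (buyer, seller, qty)."""
    net = defaultdict(int)
    for buyer, seller, qty in trades:
        net[buyer] += qty   # the buyer adds a long position
        net[seller] -= qty  # the seller adds a short position
    return dict(net)

# The chain of offsetting corn futures trades from the example in the text.
trades = [
    ("B1", "B2", 5000),  # B1 buys 5,000 bu from B2
    ("B2", "B1", 6000),  # B2 buys 6,000 bu from B1
    ("B3", "B2", 1000),  # B2 sells 1,000 bu to B3
    ("B1", "B3", 1000),  # B3 sells 1,000 bu to B1
]
print(net_positions(trades))  # {'B1': 0, 'B2': 0, 'B3': 0}: a ring can clear
```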

Finally, the transformation in Chicago grain markets from forward to futures trading occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance


Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30), though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Nonetheless, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume exceeded crop production by a factor of eleven.
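
Working backwards from the figures quoted above, a quick computation recovers the implied average annual crop production in each period; the only inputs are the volumes and multiples given in the text:

```python
# (futures volume in millions of bushels, volume-to-production multiple)
periods = {"1884-1888": (23600, 8), "1966-1970": (25803, 4)}

for period, (volume, multiple) in periods.items():
    production = volume / multiple
    print(f"{period}: implied production ~ {production:,.0f} million bushels")
```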

The comparable data for cotton futures are presented in Chart 2. Again, trading in the nineteenth century was significant. By 1879 futures volume exceeded production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). Even so, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance from two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 This strict interpretation has since been modified somewhat, however (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
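
The arbitrage argument can be made concrete with a short sketch. This is a deliberately simplified cost-of-carry check (it ignores interest, convenience yield, and the modifications to the strict interpretation noted above); the prices are hypothetical, in cents per bushel:

```python
def carry_trade(near_price, far_price, storage_cost):
    """Compare the calendar spread with the cost of storage and name the
    trade the arbitrage argument suggests. All inputs in cents per bushel."""
    spread = far_price - near_price
    if spread > storage_cost:
        return "buy near / sell far, take delivery and store"
    if spread < storage_cost:
        return "sell near / buy far"
    return "no arbitrage: spread equals the cost of storage"

# Hypothetical May and September wheat at 240 and 252, with 10 cents storage.
print(carry_trade(240, 252, 10))  # spread of 12 exceeds 10: carry the grain
```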

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices consistently track (though not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
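
A bare-bones version of such an efficiency check can be written in a few lines. The price series below are invented for illustration (they are not Santos’s data), and the mean forecast error stands in for the risk premium plus noise:

```python
# Month-t futures price quoted in month t-2, and the realized month-t cash
# price, both in cents per bushel (hypothetical numbers).
futures = [52.0, 48.5, 50.0, 55.0, 47.0]
spot    = [53.1, 47.9, 50.4, 54.2, 47.8]

errors = [s - f for s, f in zip(spot, futures)]
mean_error = sum(errors) / len(errors)  # a persistent bias suggests a risk premium
print(round(mean_error, 2))             # 0.18: small relative to price levels
```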

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between gambling and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered within five days, a misdemeanor (the law was repealed in 1862); in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (and are) not futures contracts, and although most exchanges had already outlawed them by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).


The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges, rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment unfit (too harsh) for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year-old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely due to the breakdown of the Bretton Woods exchange rate regime, which essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and to relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total futures trading volume in 1982; by 1985 it had fallen to less than one-fourth of all trading. In the same year, the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today U.S. exchanges actively trade contracts on a wide range of underlying assets (Table 1), from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
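The degree-day arithmetic behind such weather contracts can be sketched numerically. In the illustration below, only the 65-degree Fahrenheit base comes from the text; the entry level and the dollars-per-degree-day multiplier are invented for the example, and actual exchange-traded weather contracts settle on cumulative monthly or seasonal degree-days with their own contract sizes.

```python
# Hypothetical sketch of a heating-degree-day (HDD) futures payoff.
# One day's HDD = max(0, 65 - average temperature in Fahrenheit);
# the contract settles on the sum of daily HDDs over the period.

BASE_TEMP = 65.0  # the 65-degree Fahrenheit base mentioned in the text

def heating_degree_days(daily_avg_temps):
    """Cumulative HDDs for a list of daily average temperatures."""
    return sum(max(0.0, BASE_TEMP - t) for t in daily_avg_temps)

def long_futures_payoff(settled_hdd, entry_hdd, dollars_per_hdd):
    """Payoff to a long position: gains if the period is colder than priced."""
    return (settled_hdd - entry_hdd) * dollars_per_hdd

# A cold week: most days below the 65-degree base contribute degree-days.
temps = [50, 48, 55, 60, 67, 62, 58]
hdd = heating_degree_days(temps)           # 15+17+10+5+0+3+7 = 57 degree-days
print(hdd)                                 # 57.0
print(long_futures_payoff(hdd, 50, 20.0))  # (57 - 50) * $20 = 140.0
```

The key point the sketch captures is that the payoff depends on a temperature index rather than on anything deliverable, which is why such contracts must be cash-settled.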

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All Ordinaries, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth-century counterpart in other respects as well. First, the popularity of open outcry trading is waning; today, for example, the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, roughly 99% of all futures contracts are now settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on December 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B., and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, see the exchange’s website under “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels, and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40% of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond comprises the cost of acquiring and holding (or storing) it until delivery, minus the return earned during the carry period.

13 More specifically, the price of storage comprises three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant who stores the commodity derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), by contrast, is a decreasing function of the amount stored: the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract priced higher than the faraway contract – an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
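The price-of-storage decomposition described in footnotes 12 and 13 can be illustrated with a short numerical sketch. All figures below are hypothetical; the sketch simply shows that the spread between the faraway and nearby contract prices approximates physical cost plus financing cost minus the convenience yield, and that a sufficiently large convenience yield turns the price of storage negative, inverting the market.

```python
# Illustrative sketch of Working's price-of-storage decomposition
# (all figures hypothetical, in dollars per bushel between two delivery dates).
#   price of storage = physical cost + financing cost - convenience yield
#   faraway price   ~= nearby price + price of storage

def price_of_storage(physical_cost, financing_cost, convenience_yield):
    """Net cost of carrying inventory between two delivery dates."""
    return physical_cost + financing_cost - convenience_yield

def implied_faraway_price(nearby_price, physical_cost, financing_cost,
                          convenience_yield):
    """Faraway contract price implied by the nearby price and carrying costs."""
    return nearby_price + price_of_storage(physical_cost, financing_cost,
                                           convenience_yield)

# Normal market: ample supplies, low convenience yield -> positive carry,
# so the faraway contract is priced above the nearby contract.
print(round(implied_faraway_price(3.00, 0.08, 0.04, 0.02), 2))  # 3.1

# Scarce supplies: high convenience yield -> negative price of storage,
# so the nearby contract is priced above the faraway contract.
print(round(implied_faraway_price(3.00, 0.01, 0.01, 0.15), 2))  # 2.87
```

The second case is Working’s “negative price of storage”: with little to store, dealers effectively pay for the privilege of holding inventory.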

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed when a nineteenth-century CBT wheat futures corner backfires on him, while the welfare of producers and consumers hangs in the balance.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which the Supreme Court declared unconstitutional in Hill v. Wallace (1922).

Citation: Santos, Joseph. “A History of Futures Trading in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008.

Investing in Life: Insurance in Antebellum America

Author(s):Murphy, Sharon Ann
Reviewer(s):Hilt, Eric

Published by EH.Net (March 2012)

Sharon Ann Murphy, Investing in Life: Insurance in Antebellum America. Baltimore: Johns Hopkins University Press, 2010. xii + 395 pp. $65 (hardcover), ISBN: 978-0-8018-9624-8.

Reviewed for EH.Net by Eric Hilt, Department of Economics, Wellesley College.

The first half of the nineteenth century witnessed a transformation of the American economy that some historians have termed the “market revolution.”  Financial markets and institutions played a central role in this process, as banks proliferated and securities markets deepened.  In addition, beginning in the 1820s, insurance companies began to offer American households life insurance policies.  Over the course of the nineteenth century, the business expanded rapidly, and by 1870 more than $2 billion in life insurance was in force in the United States.  The companies that underwrote these policies became important intermediaries within the financial system.

Sharon Ann Murphy’s Investing in Life tells the story of the development of the American life insurance industry through the 1870s. Best characterized as a business history of antebellum life insurance, the book draws on a rich variety of archival records from early companies, as well as newspapers and printed sources. It focuses on the strategies employed by the industry’s entrepreneurs to surmount the challenges they faced in establishing and expanding the business.

And the industry faced a great many challenges in its early history. During the first half of the nineteenth century, no tables of mortality or life expectancy existed for the American population, and the vital statistics necessary to compute such tables were generally not collected. Rate-setting was therefore initially based on tables used by the English industry, combined with a fair amount of guesswork. Legal issues also impeded the industry’s efforts to market its products, among them common law restrictions on the ability of married women to enter into contracts, and uncertainty over the claims of a deceased person’s creditors on the payouts of life insurance policies to their families. In response to these challenges, prominent figures in the industry worked with both the federal government and state governments to begin collecting mortality data and to reshape the law in ways friendlier to the industry.

Cultural barriers were important as well.  Murphy persuasively refutes the notion that Americans’ religious beliefs were somehow incompatible with purchasing life insurance, as some scholars have suggested.  Nonetheless, the European experience with using life insurance policies to gamble on the duration of other people’s lives (or worse) made the American population initially reluctant to utilize the industry’s services.  The industry therefore adhered to strict standards regarding “insurable interest” – one could only insure the life of another if a financial interest in that person’s life, such as a debt owed from that person, could be documented.  And the industry emphasized the benefits of safety and security that a life insurance policy could offer to the growing ranks of salaried, middle-class household heads in their advertising campaigns.  As the composition of the population changed, the industry began to change the products it offered as well, for example creating low-cost “industrial” policies for working-class employees in the second half of the nineteenth century.

Although the earliest life insurance corporations were organized as stock companies, starting in the mid-1830s mutuals were created, and quickly dominated the industry.  Murphy argues that the success of the mutual model was not due to the lower rates they initially charged, or to other organizational advantages, but rather to a marketing advantage: the contracts of mutuals offered the appeal of a long-term investment, since the policy holders were entitled to a share of the accumulated profits from their premium payments.  The mutuals thus advertised themselves as “savings institutions” to the middle class, offering something more than insurance to households who might have considered an account with a savings bank. The stock companies responded in the 1850s by offering policies on mutual plans, and by offering tontine or “deferred dividend” plans.

Murphy argues that the Civil War was a watershed event in the industry’s development.  It profoundly disrupted the operations of Northern companies that had underwritten policies on the lives of people residing in Southern states, including some that had insured the lives of slaves on behalf of their masters.  But more importantly, it created an opportunity for the industry to market its services to the men who served in the War, and associate itself with the Union’s cause.  In the end the extremely high rates charged for these policies made them relatively unattractive, and few were sold.  But Murphy argues the industry benefitted from the war because “it revealed to Americans the benefits of insurance” (p. 274), while raising awareness of mortality.  New civilian policies did indeed grow rapidly during the War.

The Civil War created considerable uncertainty over rate-setting, and the industry’s trade association responded by setting an industry standard for war rates.  At least since the 1850s, prominent firms in the industry had attempted to coordinate rate-setting policies in order to reduce the competition they faced from new entrants.  The American states had also established a tradition of imposing high fees on out-of-state companies, in order to protect the underwriters located within their borders.  The industry sought to replace these state regulations with a system of federal regulation, and also challenged state laws that discriminated against out-of-state companies on constitutional grounds.  But in 1869, the Supreme Court ruled in Paul v. Virginia that insurance contracts underwritten by companies across state lines were not “interstate commerce,” and therefore fell within the legitimate purview of state law.

The book’s treatment of the managerial strategies employed in the industry – the development of the agency system, the content of the companies’ marketing campaigns, and the details of how different insurance products worked – is particularly strong. This book makes a fine contribution to the study of the history of the insurance business. My only criticism is that its focus on management comes at the cost of excluding other questions of potentially great interest. For example, insurance companies became enormously important financial intermediaries over the nineteenth century, but there is very little analysis or data on the firms’ investments or their role in the financial system. And although some detail is provided on the content of state regulations of insurance companies, the political economy of these regulations is not explored, nor is much of a comparative perspective on them presented. Finally, the book mentions that waves of failures occurred in the 1870s, but relatively little attention is given to those events or to other collapses from earlier periods in the industry’s history, which to this reader seem as important as the successes.

Eric Hilt is Associate Professor of Economics at Wellesley College. He is the author of “Rogue Finance: The Life and Fire Insurance Company and the Panic of 1826” (Business History Review, Spring 2009).

Copyright (c) 2012 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator. All EH.Net reviews are archived.

Subject(s):Business History
Geographic Area(s):North America
Time Period(s):19th Century

Insuring the Industrial Revolution: Fire Insurance in Great Britain, 1700-1850

Author(s):Pearson, Robin
Reviewer(s):Tebeau, Mark

Published by EH.NET (February 2006)

Robin Pearson, Insuring the Industrial Revolution: Fire Insurance in Great Britain, 1700-1850. Aldershot, UK: Ashgate, 2004. xiii + 434 pp. $100 (hardcover), ISBN: 0-7546-3363-2.

Reviewed by Mark Tebeau, Department of History, Cleveland State University.

Despite its rather obvious importance to modern economic development, the history of fire and property insurance has been largely neglected – until now, that is. Robin Pearson’s exhaustively researched and meticulously argued study, Insuring the Industrial Revolution: Fire Insurance in Great Britain, 1700-1850, offers the definitive history of the British fire insurance industry through the middle of the nineteenth century. Even more critically, Pearson establishes just how integral property insurance was to the industrial revolution. Although Pearson is careful to note that fire insurance did not determine economic development (and, in fact, was often shaped by it), he shows how fire insurance developed in support of the broader economy, was often at the cutting edge of industrial business expansion, and fostered further economic growth.

At its broadest level, Pearson’s argument demonstrates how fire insurance became integral to an industrializing society. Widely available in London by the 1850s and easily acquired in the provinces, fire insurance reduced the uncertainty associated with the hazard of fire – at least in economic terms, for the businesses and middle-class households for whom insurance policies would have been most affordable. On mechanistic grounds, the availability of relatively cheap and stable forms of insurance offered security to property owners – residential, mercantile, or manufacturing. By indemnifying policyholders against significant property losses from fire, insurance provided an institutional incentive for accumulating wealth in the form of material items and investment. In short, it minimized the risks associated with aggressive economic development. At the same time, fire insurance was critical to financing infrastructure, and thus to industrial development: firms provided capital for a variety of public improvement projects as well as for private economic ventures. Fire insurance firms also depended on, and strengthened, the institutional networks on which economic development relied. Thus, fire insurance became integral to industrial activity, “forming part of the feedback mechanism by which trust and confidence multiplied within business communities” (368).

Pearson’s research is exhaustive and his arguments are qualified with exceptional care – so much so that it is difficult to offer a full accounting in a brief review. Insuring the Industrial Revolution begins with an initial chapter that outlines the overall development of the industry in a series of detailed and well-constructed tables. After that, the story is organized into two parts. The first section offers a chronological portrait of the industry. Pearson essentially divides his narrative into three periods of analysis, bounded by major political and economic developments as well as shifting trends in the industry itself: the period from 1720 to 1782, the era from 1782 to 1815, and finally the years between 1815 and 1850. In these chapters, Pearson places the industry into the historiography of the British industrial revolution. Throughout, he takes an approach that explores the entirety of the market in insurance, examining the vagaries of fire insurance firms in the provincial areas as well as in larger cities. He meticulously compiles and analyzes a mountain of data, synthesized into over fifty figures and tables – an impressive and no doubt time-consuming contribution in their own right.

The second section of Insuring the Industrial Revolution examines the industry’s internal organization in a thematic explication of its central elements. It explores four broad topical areas: the process of company foundation and the social, political, and economic networks behind this process; the marketing of insurance and the development of networks of agents to manage the insurance transactions; the core practice of underwriting and its change over time, including the challenges of assessing risk; and the trends in how companies invested their capital and understood that capital in terms of their broader portfolio of risk, as well as how the insurance industry operated as an investment from the perspective of individual investors.

Although the book is well written, it is easy to get lost in the details of this story. Pearson lovingly and painstakingly recounts an exhaustive list of similarities and differences in the industry, paying special attention to the geographic differences between Britain’s various provincial areas, and between the provinces and the major metropolitan centers. Sometimes frustrating from the reader’s perspective, this level of specificity nonetheless advances the larger purpose of the book, which is to capture the subtle – and sometimes contradictory – manner in which the industry developed. Nor is this precisely a critique of Pearson’s skill as a writer. To the contrary, Pearson shows a deft authorial voice in juggling such a complex story. For example, both the first and second sections of the narrative cover the same material, but there is little sense of redundancy. In fact, Pearson’s examination of the various elements of the insurance industry is exceptionally well constructed. These chapters are a primer on the basics of insurance, introducing issues that did not disappear in 1850 but would continue to haunt insurers well into the twentieth century.

Insuring the Industrial Revolution will become a touchstone for future research on the history of fire and property insurance in part because of the connections that Pearson makes between industrialization and fire insurance. More importantly, though, this work also lays out an agenda for future research into the industry. Pearson provides a compelling argument as to why we should seek to better understand the connections between economic development and property insurance. He suggests, too, that we must look at the subject globally, developing rich case studies that explore the history of insurance in Europe, the United States, and other places. And, through the example of what he has done with the British fire insurance industry, he demonstrates the benefits of weaving such case studies into the comparative history of property insurance. Not only will we get a better sense of the particulars of the industry, but we will also be able to better understand how the expansion of global economic connections may have fostered the stability of the insurance industry by creating a market in reinsurance and dispersing risk more widely. And, finally, we must keep in mind that the history of the fire insurance industry does not end in 1850 with the advent of more sophisticated tools for understanding risk. Rather, the industry’s continued evolution occurs in a direct relation to broader economic, political, and societal changes, not the least of which is that the danger of fire itself will continue to shift in modernizing societies.

Indeed, if Insuring the Industrial Revolution shows how the study of fire insurance contributes to broader debates in economic history, it also suggests implicitly that we place the study of fire insurance into a wider historical lens. Unfortunately, Pearson is a bit too guarded about such possibilities, arguing that “establishing an evidential link between the expansion of insurance and broad attitudinal changes in a society is extremely difficult” (368). However, I believe that we should nonetheless push the boundaries here and broaden the frame. We should identify ways in which the expansion of fire (and property) insurance was related to changes in the social, cultural, and political realms. I agree that such connections are difficult to prove, but as I suggest in my own work on urban fire risk, the work of insurers had tremendous implications for society. These include encouraging consumerism and the consumer safety movement; fire underwriters’ activities are clearly linked to shifting perceptions of societal danger; and, at the very least, insurers’ visions of the world frequently have been built into the material landscapes of ordinary life.

Insuring the Industrial Revolution is a singular achievement. Robin Pearson demonstrates that fire insurance played a consequential, if sometimes ambivalent, role in the industrial revolution. He also provides a roadmap that future scholars in this area will follow when constructing their own studies of the history of fire insurance. I hope that this fine study garners the wide audience it deserves.

Mark Tebeau is the author of Eating Smoke: Fire in Urban America, 1800-1950 (Johns Hopkins University Press, 2003).

Subject(s): Markets and Institutions
Geographic Area(s): Europe
Time Period(s): 19th Century

Public Pensions: Gender and Civic Service in the States, 1850-1937

Author(s):Sterett, Susan M.
Reviewer(s):Short, Joanna

Published by EH.NET (December 2003)

Susan M. Sterett, Public Pensions: Gender and Civic Service in the States, 1850-1937. Ithaca, NY: Cornell University Press, 2003. x + 222 pp. $39.95 (cloth), ISBN: 0-8014-3984-1.

Reviewed for EH.NET by Joanna Short, Department of Economics, Augustana College

In Public Pensions, Susan M. Sterett examines the development of state and local pensions in the United States. As early as the 1850s a few large cities made payments to disabled police and firefighters. By 1910, many cities provided pensions for teachers and other civil servants. By 1925, three states had developed state old-age pension plans for all elderly residents. Clearly, views on the appropriate use of public funds for pensions expanded. Initially, only those who performed a dangerous public service were eligible for a pension. Pensionable service gradually expanded to include any public employment, and finally included everyone regardless of service or employment.

Certainly, pension advocates influenced the transformation of public opinion on pensionable service, and thereby influenced the opinions of state court judges. More directly, though, courts responded to the inevitable challenges to new pension programs. In the process, judges carefully constructed their reasoning and placed pensions in the broader context of other payments to individuals, like poor relief and aid to farmers. In this book Sterett, Professor of Political Science at the University of Denver, provides a much-needed analysis of state court decisions on pension programs. She finds that the courts insisted on distinctions between “service,” which was pensionable, and “work,” which generally was not. Gradually, these distinctions were blurred, and the courts became tolerant of social insurance programs.

Courts regulated state taxing and spending using the public purpose doctrine. States could spend money on individuals or firms if the payments served a public purpose. Spending without a public purpose was considered class legislation, an unconstitutional preference of one group over another. Under this doctrine, Sterett claims that courts maintained a distinction between those who were inherently dependent and those who were independent. State payments to the dependent (poor relief, pensions for disabled veterans) served the public interest. Hence, “mother’s pensions,” for widows with children, were a legitimate use of state funds, but only when limited to the indigent. State payments to the independent, though, were considered class legislation. For example, in Griffith v. Osawkee Township (Kansas Supreme Court, 1875), the court found that a township could not sell bonds in order to buy grain for farmers who had suffered a total crop loss as a result of a grasshopper plague. At that time, aid to farmers did not satisfy a public purpose. Thus, the gradual expansion of public pensions hinged on a change in what constituted a public purpose.

The strength of Public Pensions is Sterett’s treatment of some of the early, and relatively obscure, court battles over payments to firemen and soldiers. In the 1850s, for example, insurance companies challenged laws in New York and Illinois requiring them to pay a tax that went to firemen’s charities. The courts upheld the tax, in part because those who paid it benefited directly from the services provided by firefighters. In addition, the firemen’s charities provided for the dependent, that is, for disabled firefighters.

Similarly, in response to the Conscription Act of 1863, several towns subject to a draft quota issued bonds or raised taxes in order to raise commutation fees collectively. Taxpayers, and occasionally bondholders who did not receive timely interest payments, sued the towns. For example, Charles Booth, apparently a cranky taxpayer, sued Woodbury, Connecticut. Woodbury planned to raise $200 for each man on the town’s 32-man draft quota, or $6,400 in all. Booth argued that the town was transferring the personal liability of the draftees to everyone in town. Woodbury argued that military service was the collective obligation of the town. The court agreed that the draft quota was a collective obligation; therefore, raising money for commutation fees served a public purpose.

Although federal military pensions posed few legal problems, since they clearly served a national purpose, states repeatedly disallowed state pensions for all but the disabled and poor. State courts argued that military pensions were a reward for past service; thus, they could not serve a public purpose by, say, encouraging more recruits. How, then, did public purpose grow from applying only to those who performed a dangerous service to those who worked as teachers and in other civil service jobs? Here, Sterett is not as clear. Courts apparently began to recognize the potential benefits that pensions could bring in recruiting, retaining, and providing an orderly retirement for civil servants. In particular, contemporary advocates of federal civil service pensions emphasized that pensions with a mandatory retirement age would make the civil service more efficient by retiring elderly workers who had “retired on the job.” Thus, the expansion of pensions to civil servants may have originated with the aging of the civil service ranks once these jobs were transformed from temporary patronage positions into permanent civil service jobs. Sterett disagrees with this view, since pensions originated at the local level, and patronage (for example, in the naming of police captains) continued with pensions as it did without them. Instead, Sterett argues that civil service pensions were part of a more general (and vague) transformation of the courts’ view from one of public, collective service to more direct ties between the individual and the state.

Much more could be said on why states paid pensions to expanding groups of civil servants, and why courts gradually accepted them. Sterett does a great service, though, by directly examining the reasoning that courts used in response to challenges, and how this reasoning changed over time.

Joanna Short is assistant professor of economics at Augustana College. She is the author of an article, currently under review, on Confederate veteran pensions and retirement in the South. At present, she is investigating the origins of saving for retirement in nineteenth-century America.

Subject(s): Labor and Employment History
Geographic Area(s): North America
Time Period(s): 20th Century: Pre WWII

A History of Public Sector Pensions in the United States

Author(s):Clark, Robert L.
Craig, Lee A.
Wilson, Jack W.
Reviewer(s):Fishback, Price

Published by EH.NET (October 2003)

Robert L. Clark, Lee A. Craig, and Jack W. Wilson, A History of Public Sector Pensions in the United States. Philadelphia: University of Pennsylvania Press, 2003. ix + 259 pp. $49.95 (cloth), ISBN: 0-8122-3714-5.

Reviewed for EH.NET by Price Fishback, Department of Economics, University of Arizona.

Professors Clark, Craig, and Wilson (CCW) have written an extremely useful history of public sector pensions in the U.S. from Revolutionary times through the 1920s. After laying out the basic economics of pension plans, they set the stage for a discussion of the U.S. experience with military pensions by tracing the history of military pensions from Roman times through the various European plans in the late eighteenth century. Almost half of the book is devoted to an extensive case study of the development of the U.S. Navy pension fund. Once the naval experience has been established, they compare and contrast it with the development of pension plans for the Army, federal nonmilitary employees, and state and local workers.

One of my favorite chapters is the second chapter, on “Pension Economics.” It is a marvelous introduction to the economic issues related to the provision of retirement pensions, clearly written with simple illustrations that drive the fundamental points home. I plan to use it as a reading for MBA students to give them a better appreciation of pension schemes and how they influence labor market decisions. It would also be useful as an extra reading for an undergraduate labor economics course or any other course where pensions are an issue.

The bulk of the book deals with the development of the U.S. Navy pension fund, which is fascinating in part because it seems so unusual to the modern student of pensions. The Continental Congress had offered disability payments to veterans of the Revolution but had never really established a fund. As the Navy was established under the Early Republic, a specific disability fund was created, and it relied primarily on the sale of prizes captured in naval conflicts. It seems unusual to fund long-standing and relatively stable obligations with assets that are highly variable in nature. However, CCW show that the use of prizes to fund disability pensions and to provide direct compensation to naval forces was a long-standing practice in many societies. They argue that it was difficult to monitor the actions of naval officers and that the provision of prizes helped provide the proper incentives in naval engagements. (For more extended discussions of this issue, see the recent debate between Douglas Allen and Daniel Benjamin and Christopher Thornberg in the April 2003 issue of Explorations in Economic History.)

Through the early 1830s, assets in the naval fund accumulated at a faster rate than disability payments were made. Given this surplus of assets, it was easy politically for Congress to expand the definition of disability on several occasions. Eventually, old age and service in the Navy alone qualified the ex-sailor for funds. Benefits to the families of veterans who died also expanded greatly. Problems eventually arose because benefits kept expanding while the capture of prizes remained highly variable, so that Congress had to bail out the fund.

CCW’s description of the management of the naval pension fund will make pension experts cringe. Monies from the sale of prizes were originally invested in relatively safe U.S. Treasury bonds, although the fund appeared to be paying a premium on the purchase of the assets. This avenue for investment eventually dried up as the U.S. retired its war debt, and the fund was then invested in a range of state bonds and bank stocks. In analyzing the choices made by the pension fund manager, CCW provide a nice summary of the recent research on investment opportunities in the early 1800s. In their view the fund managers made a series of odd choices, particularly the choice to invest in some Washington, D.C. area banks. These investments were substantially riskier than the First Bank of the United States and many of the New York banks. Further, the failures of the Washington banks created some temporary problems for the disability fund. The odd choices came about partially because of political pressures that left the stock of the Bank of the United States and British government bonds out of bounds. In other cases, the choices appear to have been fraudulent. My sense is that CCW may have relied too much on hindsight in describing the problems with the investment choices. Here we have a brand-new method for ensuring the payment of disability benefits and no one with any prior experience in operating such a fund. The options for asset investments were very limited because there were few banks and British government bonds were not an option. The choice of Washington area investments may well have been driven by monitoring issues, as the fund manager trusted investing in assets that he could see close to home, with people he knew, as opposed to assets in New York (or in Britain, for that matter) at a time when the post took a couple of weeks to arrive. Certainly, the discounts on distant bank currencies prior to the Civil War and the regional differences in interest rates through the nineteenth century are consistent with people treating distance as a significant source of monitoring costs.

CCW draw several lessons for modern public pension funds from the naval fund experience with which I agree. First, fund surpluses give politicians incentives to expand the generosity of benefits, particularly because beneficiaries can expect that if the fund goes bust, the taxing power of the state can be relied upon to maintain their benefits. Second, absence of adequate oversight can lead to significant problems with fraud in the administration of the funds, particularly when the funds are invested in private assets. Third, it is likely that political exigencies will get in the way of sound financial management of the fund. We can already point to experiences with Social Security funds that seem to fit these warnings, although I believe that CCW could have done more to describe actual problems experienced with modern pension funds that highlight how the lessons from the past are repeated.

CCW use the chapter on Army pensions to explain the fundamental differences in the financing of the disability pension funds for the Army and the Navy and to illustrate the common labor economics of retirement plans. The Army, unlike the Navy, did not rely on prizes for compensation, in part because the actions of land-based forces could be monitored more closely. Eventually, the reliance of the naval fund on prizes was eliminated, as changes in naval technology lowered the costs of monitoring effort at sea and Congress sought to eliminate the costs of operating two separate plans for the Army and the Navy. Both the Army and Navy pensions eventually shifted from pure disability plans into retirement plans. The retirement plans offered the advantage of providing deferred compensation that would more tightly bind people to military careers. However, the typical patterns of promotion led to too many officers “retiring on the job.” Thus, the Army and Navy plans used a combination of limits on service and generosity of benefits to force and induce officers to retire when their productivity declined.

I have a couple of speculative insights to offer on the development of the Army and Navy plans (and, for that matter, plans for police and firemen as well). All of these plans started as disability plans to take care of those protecting the public when they were injured. All eventually turned into retirement plans by expanding the definition of disability to include old age. I believe that a combination of compassion for those who served and protected and the costs of monitoring disability claims can help explain this phenomenon of converting disability plans into de facto retirement plans. At young ages it is relatively easy to determine a disability related to service. However, battle wounds, injuries, and diseases often take their toll not just initially but later in life as well. Once veterans reached their sixties, trying to sort out the difference between battle-related and other disabilities became more difficult. As the number of potential pensioners dwindled, the total costs of taking care of those remaining fell, so it became easier to make the case that mere veteran status and current disability were enough to qualify for benefits.

The final two substantive chapters discuss the early origins of nonmilitary pensions for federal employees and for state and local employees. Municipalities led the way in providing pension plans for police officers, fire fighters, and teachers. The funding of these plans was restricted by the basic constitutional relationships between cities and the states; therefore, most city pension plans were invested heavily in their own debt. These plans were relatively generous, replacing nearly 50 percent of lost earnings, as they do today. At the federal level, there had been provisions for pensions for some federal employees, but these were typically provided through special ad hoc legislation. The introduction of federal pensions was delayed until the development of a professional civil service characterized by long-term service. The federal plan provided much better benefits than the existing private pension funds of the time and over the long run put pressure on private employers to improve their programs when competing with the government for workers. These two chapters provide some new insights and, in particular, some valuable quantitative evidence on the character of state and local plans.

CCW could have done more to highlight just how important the military pension funds were in the development of social insurance for the entire population. Theda Skocpol’s Protecting Soldiers and Mothers: The Political Origins of Social Policy in the United States (Harvard University Press, 1992) and Dora Costa’s The Evolution of Retirement (University of Chicago Press, 1998) both show that Civil War pensions were widespread among retirees in the North in the early 1900s. Meanwhile, the pressure from World War I veterans for early payments of the veterans’ bonus was one of many pressures that contributed to the adoption of the Social Security Act.

The book left me sated with information on the navy pension fund, but it left me hungry for more evidence on the state and local and federal pension funds in the early 1900s. In terms of people served, it seems like the nonmilitary funds were more important. However, it is important to note that in terms of published knowledge, we knew much less about the early military pensions than we did about the state and local funds. In the final analysis, this book has admirably filled that gap. It provides a strong foundation for future studies of the development of both public and private pensions in the twentieth century.

Price Fishback is the Frank and Clara Kramer Professor of Economics at the University of Arizona and a Research Associate with the National Bureau of Economic Research. He and Shawn Kantor are the authors of A Prelude to the Welfare State: The Origins of Workers’ Compensation. Price is currently working on a series of studies of the Political Economy of the New Deal.

Subject(s): Labor and Employment History
Geographic Area(s): North America
Time Period(s): 20th Century: Pre WWII