
Child Labor in the United States

Robert Whaples, Wake Forest University

Child labor was widespread in agriculture and in industry in the U.S. economy until the early twentieth century but largely disappeared by the 1930s.

In the colonial period and into the 1800s parents and guardians generally required children to work. Initially most of the population worked in agriculture and children gradually moved into tasks demanding greater strength and skills as they aged. Craig (1993) uses census data to gauge the impact and value of child labor in the middle of the 1800s. He finds that the activities of farm-owning families were not closely linked to the number and ages of their children. Within each region, families in different life-cycle stages earned revenues in almost exactly the same manner. At every life-cycle stage, farm-owning families in the Midwest, for example, earned approximately 30 percent of their gross farm revenue from growing cereal crops; 29 percent from dairy, poultry, and market gardens; 22 percent from land and capital improvements; and 15 percent from hay and livestock. In addition, Craig calculates the value of child labor by estimating how the total value of labor output changed in the presence of each type of family member. He finds that children under 7 reduced the value of farm output, presumably because they reduced their mothers’ economic activities. For each child aged 7 to 12 the family’s output increased by about $16 per year – only 7 percent of the income produced by a typical adult male. Teen-aged females boosted family farm income by only about $22, while teen-aged males boosted income by $58. Because of these low productivity levels, families couldn’t really strike it rich by putting their children to work. When viewed as an investment, children had a strikingly negative rate of return because the costs of raising them generally exceeded the value of the work they performed.
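To see why the investment framing comes out negative, it helps to put rough numbers on it. The sketch below uses the per-child earnings figures Craig reports, but the annual cost of rearing a child and the discount rate are hypothetical values chosen only for illustration; they are not estimates from Craig's study.

```python
# Illustrative calculation only: the earnings figures are Craig's estimates quoted
# above; the annual rearing cost and the discount rate are hypothetical assumptions.
ANNUAL_REARING_COST = 30.0   # hypothetical cost of feeding and clothing a child, per year
DISCOUNT_RATE = 0.05         # hypothetical

def child_earnings(age, male=True):
    """Approximate annual value of a child's labor on a family farm, per Craig (1993)."""
    if age < 7:
        return 0.0                        # children under 7 added nothing (or less)
    if age <= 12:
        return 16.0                       # roughly $16 per year for ages 7 to 12
    return 58.0 if male else 22.0         # teen-aged males vs. teen-aged females

# Net present value of a (male) child's labor, birth through age 18, net of rearing costs
npv = sum((child_earnings(age) - ANNUAL_REARING_COST) / (1 + DISCOUNT_RATE) ** age
          for age in range(19))
print(f"NPV of child labor net of rearing costs: ${npv:,.2f}")   # negative under these assumptions
```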

The low value of child labor in agriculture may help explain why children were an important source of labor in many early industrial firms. In 1820 children aged 15 and under made up 23 percent of the manufacturing labor force of the industrializing Northeast. They were especially common in textiles, constituting 50 percent of the work force in cotton mills with 16 or more employees, as well as 41 percent of workers in wool mills, and 24 percent in paper mills. Goldin and Sokoloff (1982) conclude, however, that child labor’s share of industrial employment began its decline as early as 1840. Many textile manufacturers employed whole families and – despite its declining share – child labor continued to be an important input into this industry until the early twentieth century.

In the mid-1800s the median age of leaving home was about 22.5 for males and 20.5 for females. Apprenticed children generally left home at much earlier ages, but this institution was not very strong in the U.S. One study of rural Maryland found that nearly 20 percent of white males aged 15 to 20 were formally bound as apprentices in 1800, but the percentage fell to less than 1 percent by 1860.

National statistics on child labor are first available in 1880. They show that the labor force participation rate of children aged 10 to 19 was considerably higher among black males (65.5 percent) and females (43.7 percent) than among white males (43.1 percent) and females (13.1 percent). Likewise, the rate among foreign-born children exceeded that of their counterparts born in the U.S. – by about 9 percentage points among males and 16 percentage points among females. These differences may be largely attributable to the higher earnings levels of white and native-born families. In addition, labor force participation among rural children exceeded urban rates by about 8 percentage points.

The figures below give trends in child labor from 1880 to 1930.

                                                                 1880    1900    1930
Labor force participation rates of children, 10 to 15 years old (percentages)
   Males                                                         32.5    26.1     6.4
   Females                                                       12.2     6.4     2.9
Percentage of 10 to 15 year olds in agricultural employment
   Males                                                         69.9    67.6    74.5
   Females                                                       37.3    74.5    61.5

Note: 1880 figures are based on Carter and Sutch (1996). Other numbers are unadjusted from those reported by the Bureau of the Census.

These figures show that throughout this period agricultural employment dominated child labor, despite the fact that industrialization was occurring rapidly and agricultural employment fell from 48 percent to 25 percent of the work force between 1880 and 1930.

Data from the Cost of Living Survey of 1889-90 show the importance of child labor to urban households. While the family head’s income peaked when he was in his thirties, family expenditures peaked when he was in his fifties because of the contributions of children. Similarly, in a 1917-19 Department of Labor survey, among families with working children, children’s earnings accounted for an average of 23 percent of total family income.

The continuation of child labor in industry in the late nineteenth and early twentieth centuries, however, sparked controversy. Much of this ire was directed at employers, especially in industries where supervisors bullied children to work harder and assigned them to dangerous, exhausting or degrading jobs. In addition, working-class parents were accused of greedily not caring about the long-term well-being of their children. Requiring children to go to work denied them educational opportunities and reduced their lifetime earnings, yet parents of laboring children generally required them to turn over all or almost all of their earnings. For example, one government study of unmarried young women living at home and working in factories and stores in New York City in 1907 found that over ninety percent of those under age 20 turned all of their earnings over to their parents. Likewise, Parsons and Goldin (1989) find that children of fathers working in the textile industry left school about three years younger than those with fathers in other industries. They argue that many parents with adolescent children migrated to places, like textile centers, where their children could earn more, even though doing so didn't increase overall family wages very much. On the other hand, Moehling (2005), using data from 1917 to 1919, finds that adolescents' earnings gave them increased bargaining power, so that, for example, expenditures on children's clothing increased as the income they brought into the household increased.

The earliest legal restriction on child labor in the U.S. was a Massachusetts law in 1837, which prohibited manufacturing establishments from employing children under age 15 who hadn't attended school for at least three months in the previous year. Legislation enacted before 1880 generally contained only weak restrictions and few provisions for enforcement. In the late 1800s, however, social pressure against child labor became more organized under leaders such as Florence Kelley, Edgar Gardner Murphy and Felix Adler. By 1899, 44 states and territories had a child labor law of some type. Twenty-four states had minimum age limits for manufacturing employment by 1900, with age limits around 14 years in the Northeast and Upper Midwest, and no minimums at all in most of the South. When the 1900 Census reported a rise in child labor above levels of 1880, child labor activists responded with increased efforts, including a press campaign and the establishment of the National Child Labor Committee in 1904. (Ironically, recent research suggests the Census was in error and child labor was already on the decline by 1900.) By 1910, seventeen more states had enacted minimum age laws and several others had raised their age minimums.

Federal legislation, however, initially proved unsuccessful. The Keating-Owen Act of 1916, which prevented the interstate shipment of goods produced in factories by children under 14 and in mines by children under 16, was struck down in Hammer v. Dagenhart (1918). Likewise, the Pomerene Amendment of 1918, which taxed companies that used child labor, was declared unconstitutional in Bailey v. Drexel Furniture (1922) on the grounds that it was a penalty rather than a legitimate use of the federal taxing power and thus intruded on powers reserved to the states. In 1924, Congress passed a constitutional amendment banning child labor and sent it to the states, but it was never ratified by enough of them. Finally, the Fair Labor Standards Act of 1938 prohibited the full-time employment of those 16 and under (with a few exemptions) and enacted a national minimum wage which made employing most children uneconomical. This time the legislation received the Supreme Court's blessing.

Most economic historians conclude that this legislation was not the primary reason for the reduction and virtual elimination of child labor between 1880 and 1940. Instead they point out that industrialization and economic growth brought rising incomes, which allowed parents the luxury of keeping their children out of the work force. In addition, the decline in child labor has been linked to the expansion of schooling, high rates of return to education, and a decrease in the demand for child labor due to technological changes that increased the skills required in some jobs and allowed machines to take jobs previously filled by children. Moehling (1999) finds that the employment rate of 13-year-olds around the beginning of the twentieth century did decline in states that enacted age minimums of 14, but so did the rates for 13-year-olds not covered by the restrictions. Overall she finds that state laws are linked to only a small fraction – if any – of the decline in child labor. It may be that states already experiencing declines in child labor were simply more likely to pass legislation, which was largely symbolic.

References:

Carter, Susan and Richard Sutch. “Fixing the Facts: Editing of the 1880 U.S. Census of Occupations with Implications for Long-Term Labor Force Trends and the Sociology of Official Statistics.” Historical Methods 29 (1996): 5-24.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Goldin, Claudia and Kenneth Sokoloff. “Women, Children, and Industrialization in the Early Republic: Evidence from the Manufacturing Censuses.” Journal of Economic History 42, no. 4 (1982): 741-74.

Gratton, Brian and Jon Moen. "Immigration, Culture, and Child Labor in the United States, 1880-1920." Journal of Interdisciplinary History 34, no. 3 (2004): 355-91.

Moehling, Carolyn. “‘She Has Suddenly Become Powerful’: Youth Employment and Household Decision-Making in the Early Twentieth Century.” Journal of Economic History 65, no. 2 (2005): 414-38.

Moehling, Carolyn. "State Child Labor Laws and the Decline of Child Labor." Explorations in Economic History 36 (1999): 72-106.

Parsons, Donald O. and Claudia Goldin. “Parental Altruism and Self-Interest: Child Labor among Late Nineteenth-Century American Families.” Economic Inquiry 27, no. 4 (1989): 637-59.

Citation: Whaples, Robert. “Child Labor in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. October 7, 2005. URL http://eh.net/encyclopedia/child-labor-in-the-united-states/

Hunger in War and Peace: Women and Children in Germany, 1914-1924

Author(s):Cox, Mary Elisabeth
Reviewer(s):Guinnane, Timothy W.

Published by EH.Net (July 2022).

Mary Elisabeth Cox. Hunger in War and Peace: Women and Children in Germany, 1914-1924. New York: Oxford University Press, 2019. xviii + 383 pp. $105 (cloth), ISBN 978-0-19-882011-6.

Reviewed for EH.Net by Timothy W. Guinnane, Philip Golden Bartlett Professor of Economic History, Emeritus, Yale University.

 

Embargoes and blockades have long been a feature of warfare. During World War I, both the Central Powers and the Entente tried to prevent their enemies from trading, especially with neutrals. The Royal Navy's surface blockade proved both more effective and less damaging to relationships with other countries than did Germany's submarine warfare. Both sides justified blockades as necessary to prevent their enemies from importing material necessary for making war. Cox focuses on a part of the blockade that was harder to justify: stopping combatants from importing food. Preventing food imports had a military logic. Soldiers eat, of course, and every farmer released from agricultural work could be a soldier. Yet the clear-eyed understood that blocking food imports made war on civilians.

Food shortages led most combatants to adopt rationing schemes. Rationing did not overcome all of the supply shortfalls, however, and debates about the effects on civilians have never stopped. The impact on German civilians has been especially contentious. First, Germany's economic structure made it especially vulnerable, among the Central Powers, to food-supply interruptions. Prior to the war, external sources accounted for at least 20 percent of total German calories. German farmers relied heavily on fertilizers, most of which had been imported. Wartime demands on labor and other inputs such as draft animals also strained domestic production. If anything, Germany would have wanted to import more, not less, food during World War I. Second, the nutritional deprivation of German civilians during and just after the war supports a narrative about the Treaty of Versailles. To some, the Treaty's harsh reparations provisions reflected a deliberate effort to make the German people suffer for decades. The deprivation induced by food blockades, in that light, looks like a first step.

Mary Elisabeth Cox provides a comprehensive account of this entire episode and its effect on German civilians. She begins with the law and military strategy behind the World War I blockades and ends with the post-war efforts to feed the German population, especially its children. Her empirical core relies on several remarkable anthropometric studies from the war and immediate post-war periods. For most of these studies, Cox can draw on published data summaries (for example, the mean height of all boys in a particular school class) to conduct more analysis than appeared in the original work. Her statistical work forms the basis for her judgements about how the food shortfalls affected the various components of the civilian population. She concludes, with considerable justification, that the wartime blockade harmed women, children, and the poor much more than other social groups in Germany.

The anthropometric studies reflect a pre-war interest in measuring the human body, especially for children, and a growing concern about the fate of children during the war. None of them are ideal, but together they allow Cox to address the several facets of her question. A wartime sample of Leipzig families yields rare evidence on the intra-household implications of the blockade. Some observers claimed that German mothers mitigated the impact of food shortages by, in effect, starving themselves to protect their children. The Leipzig study’s results are consistent with that view. A different study from Straßburg includes both rural and urban children, and thus allows Cox to address the claim that farmers profited at the expense of urban dwellers. A composite collection published after the war has information on more than 570,000 children from 2,343 school classes and supports an all-German view not possible from the local efforts.

The post-war anthropometric studies reflect one biological and one political issue. Adults who face a temporary calorie shortfall typically recover, though sometimes with long-term damage to their health. Children are more vulnerable to malnutrition. Nutritional deprivation reduces height and weight below what one would expect for a given age; it also hinders the development of the brain and other organs. However, if the deficits are made good, the affected person may enjoy a period of "catch-up" growth and reach what would otherwise have been their expected height. Some of the post-war studies document this phenomenon: German children who were unusually short or underweight in 1918 grew rapidly once they had access to adequate food.

The second motivation for the post-war anthropometric studies reflects the startling fact that the end of military conflict did not bring the end of the blockade.  For some eight months after the November 1918 Armistice, the Entente powers and the United States maintained most of the wartime embargo. As Cox notes, some wanted the blockade in place to pressure Germany into accepting the ultimate peace terms. Others (less openly) supported the continued blockade to punish the Germans.

Formally ending the embargo did not return German food supply to its pre-war situation. Neither the government nor private German entities had resources to buy food in the quantities necessary. Transportation was also a problem; wartime underinvestment left the rail system weakened, and the Treaty of Versailles required Germany to give all of its merchant marine and much of its railroad rolling stock to the Allies. Cox devotes considerable space to the various post-war efforts to feed the Germans, many of which focused on children. Several foreign entities stepped in: the U.S. government in the form of the United States Food Administration (led by Herbert Hoover, the future president); private efforts spearheaded by German-American groups, by Quaker organizations, and by Save the Children (then a British charity created for this purpose); and even the Swiss government. These bodies succeeded in feeding millions of German children to a standard such that many of them experienced, according to the anthropometric studies, considerable catch-up growth.

The anthropometric studies Cox uses have the great virtue of disciplining wild statements made at the time and since. In some cases, her analysis shows patterns the original study's authors might not have appreciated. Germany's food-rationing program was intended to make sure all social classes had enough to eat, but the anthropometric data show that the poor and working classes suffered more than others. This finding will surprise few, but it illustrates the imperfections in Germany's food-rationing schemes. (These imperfections were not limited to Germany, of course.) There may be larger implications in her results: morale among soldiers and civilians alike deteriorated in the last years of the war, and the clear, differential impact of the food shortages by social class may be one reason why. The anthropometric data also show interesting nuances in the post-war relief efforts. Poorer children were shortest at the end of the war, but their heights recovered more rapidly after the war than did the heights of the affluent. Relief agencies claimed they focused on the worst affected. It seems they did.

Cox's research shows that the blockade significantly affected the height and weight of Germans, especially children, during and just after the war. Thus the policy was not harmless to civilians, as some on the Entente side asserted, and as some earlier historians concluded. On the other hand, the data do not suggest significant, life-long stunting for those measured after the post-war food-aid programs took effect. To be clear, the right counterfactual is complex: the data show that the blockade itself did a lot of damage. Without the post-war efforts, that damage might have been much more widespread and permanent. The general conclusion (food embargoes harm civilians) differs from the historically specific conclusion (in this case, post-war programs mitigated much of the harm).

None of the usual quibbles and cavils can undermine the core lessons in Cox's statistical analysis. Critics might argue with her emphasis on the Entente embargo alone, however. The social-class differences in the anthropometric results hint at a domestic policy failing. Her brief discussion of the domestic turmoil following the November 1918 Armistice also does not do justice to what most historians call a Revolution. Even if the embargo had ended with the Armistice, conditions in Germany would have made food distribution difficult.

The German government claimed that 800,000 civilians died as a direct result of the embargo. While various levels of government collected detailed and usually accurate statistics, such claims, like any that involve a counter-factual, are hard to evaluate. Excess wartime mortality among German civilians had many causes, some under government control. Cox does little with the mortality question, which is an understandable decision, but some discussion would have conveyed a better sense of the polemics involved in the blockade. Some contemporaries worried about a generation of stunted children; many more made extravagant claims about deaths.

Hunger in War and Peace has the great virtue of considering an ugly episode from several different angles. Many quantitative historical accounts, perhaps especially those that rely on anthropometric methods, tend to focus on the numbers alone, leading to a sterile, context-free study. Some authors would content themselves with dry statistics on the height of Leipzig's second-graders. Not Cox. The subject matter of her nine chapters crosses several historical subfields. Her reading and considerable archival research took her to areas that one would expect from students of diplomatic or military history, or even historians of the United States. This broad perspective accounts in part for the book's length. The length, unfortunately, also reflects some lack of discipline in the argument and prose. Some claims have little basis in her evidence; for example, "Hoover's actions ushered in a new liberalism to the United States by creating government and privately sponsored social programmes" (p. 236). Some discussions do not seem relevant to the topic; Hoover's personal motivations do not matter for his efforts in post-World War I Germany. A discussion of religious themes in the thank-you notes German children sent to their benefactors also seems a bit astray. She also claims repeatedly that this or that source was "forgotten" until she came along. This is an odd claim, especially when repeated; was it really forgotten, or were others simply not interested?

These are not serious flaws. Anyone interested either in the impact of war on civilian populations or in Germany's turbulent history in the first half of the twentieth century should just follow her where she goes. This serious scholarship sheds new light on how World War I affected civilians.

 

Timothy W. Guinnane is Philip Golden Bartlett Professor of Economic History, Emeritus, Yale University. Recent publications include "Creating a New Legal Form: the GmbH" (Business History Review, 2021) and "We Do Not Know the Population of Every Country in The World for the Past Two Thousand Years" (Journal of Economic History, forthcoming).

Copyright (c) 2022 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2022). All EH.Net reviews are archived at https://www.eh.net/book-reviews.

Subject(s):Economywide Country Studies and Comparative History
Military and War
Living Standards, Anthropometric History, Economic Anthropology
Geographic Area(s):Europe
Time Period(s):20th Century: Pre WWII

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

There isn't a single variable that serves as a perfect measure of urban decline, but this article will take an in-depth look at urban decline by focusing on the best available measure of a city's well-being – population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 130 million people, from about 151 million to 281 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

City                 1950       1960       1970       1980       1990       2000   % Change 1950-2000
New York        7,891,957  7,781,984  7,895,563  7,071,639  7,322,564  8,008,278       1.5
Philadelphia    2,071,605  2,002,512  1,949,996  1,688,210  1,585,577  1,517,550     -26.7
Boston            801,444    697,177    641,071    562,994    574,283    589,141     -26.5
Chicago         3,620,962  3,550,404  3,369,357  3,005,072  2,783,726  2,896,016     -20.0
Detroit         1,849,568  1,670,144  1,514,063  1,203,339  1,027,974    951,270     -48.6
Cleveland         914,808    876,050    750,879    573,822    505,616    478,403     -47.7
Kansas City       456,622    475,539    507,330    448,159    435,146    441,545      -3.3
Denver            415,786    493,887    514,678    492,365    467,610    554,636      33.4
Omaha             251,117    301,598    346,929    314,255    335,795    390,007      55.3
Los Angeles     1,970,358  2,479,015  2,811,801  2,966,850  3,485,398  3,694,820      87.5
San Francisco     775,357    740,316    715,674    678,974    723,959    776,733       0.2
Seattle           467,591    557,087    530,831    493,846    516,259    563,374      20.5
Houston           596,163    938,219  1,233,535  1,595,138  1,630,553  1,953,631     227.7
Dallas            434,462    679,684    844,401    904,078  1,006,877  1,188,580     173.6
Phoenix           106,818    439,170    584,303    789,704    983,403  1,321,045    1136.7
New Orleans       570,445    627,525    593,471    557,515    496,938    484,674     -15.0
Atlanta           331,314    487,455    495,039    425,022    394,017    416,474      25.7
Nashville         174,307    170,874    426,029    455,651    488,371    545,524     213.0
Washington        802,178    763,956    756,668    638,333    606,900    572,059     -28.7
Miami             249,276    291,688    334,859    346,865    358,548    362,470      45.4
Charlotte         134,042    201,564    241,178    314,447    395,934    540,828     303.5

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are grouped by region, and the cities at the top of the table – cities from the Northeast and Midwest – either experienced no significant population growth (New York) or suffered dramatic population losses (Detroit and Cleveland). These cities' experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experienced triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

Metropolitan Area                       1950        1960        1970        2000   Percent Change 1950 to 2000
New York-Newark-Jersey City, NY   13,047,870  14,700,000  15,812,314  16,470,048       26.2
Philadelphia, PA                   3,658,905   4,175,988   4,525,928   4,580,167       25.2
Boston, MA                         3,065,344   3,357,607   3,708,710   4,001,752       30.5
Chicago-Gary, IL-IN                5,612,248   6,805,362   7,606,101   8,573,111       52.8
Detroit, MI                        3,150,803   3,934,800   4,434,034   4,366,362       38.6
Cleveland, OH                      1,640,319   2,061,668   2,238,320   1,997,048       21.7
Kansas City, MO-KS                   972,458   1,232,336   1,414,503   1,843,064       89.5
Denver, CO                           619,774     937,677   1,242,027   2,414,649      289.6
Omaha, NE                            471,079     568,188     651,174     803,201       70.5
Los Angeles-Long Beach, CA         4,367,911   6,742,696   8,452,461  12,365,627      183.1
San Francisco-Oakland, CA          2,531,314   3,425,674   4,344,174   6,200,867      145.0
Seattle, WA                          920,296   1,191,389   1,523,601   2,575,027      179.8
Houston, TX                        1,021,876   1,527,092   2,121,829   4,540,723      344.4
Dallas, TX                           780,827   1,119,410   1,555,950   3,369,303      331.5
Phoenix, AZ                               NA     663,510     967,522   3,251,876      390.1*
New Orleans, LA                      754,856     969,326   1,124,397   1,316,510       74.4
Atlanta, GA                          914,214   1,224,368   1,659,080   3,879,784      324.4
Nashville, TN                        507,128     601,779     704,299   1,238,570      144.2
Washington, DC                     1,543,363   2,125,008   2,929,483   4,257,221      175.8
Miami, FL                            579,017   1,268,993   1,887,892   3,876,380      569.5
Charlotte, NC                        751,271     876,022   1,028,505   1,775,472      136.3

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.

Table 3: Land Area for Selected U.S. Cities, 1950 – 2000

(Land area in square miles)

City                 1950    1960    1970    2000   Percent Change 1950 to 2000
New York, NY        315.1   300     299.7   303.3     -3.74
Philadelphia, PA    127.2   129     128.5   135.1      6.21
Boston, MA           47.8    46      46      48.4      1.26
Chicago, IL         207.5   222     222.6   227.1      9.45
Detroit, MI         139.6   138     138     138.8     -0.57
Cleveland, OH        75      76      75.9    77.6      3.47
Kansas City, MO      80.6   130     316.3   313.5    288.96
Denver, CO           66.8    68      95.2   153.4    129.64
Omaha, NE            40.7    48      76.6   115.7    184.28
Los Angeles, CA     450.9   455     463.7   469.1      4.04
San Francisco, CA    44.6    45      45.4    46.7      4.71
Seattle, WA          70.8    82      83.6    83.9     18.50
Houston, TX         160     321     433.9   579.4    262.13
Dallas, TX          112     254     265.6   342.5    205.80
Phoenix, AZ          17.1   187     247.9   474.9   2677.19
New Orleans, LA     199.4   205     197.1   180.6     -9.43
Atlanta, GA          36.9   136     131.5   131.7    256.91
Nashville, TN        22      29     507.8   473.3   2051.36
Washington, DC       61.4    61      61.4    61.4      0.00
Miami, FL            34.2    34      34.3    35.7      4.39
Charlotte, NC        30      64.8    76     242.3    707.67

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.
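The percent changes in Table 3 can be reproduced directly from the land-area figures; the short sketch below recomputes them for the three cities singled out above, using the square-mile values shown in the table.

```python
# Recompute the 1950-2000 percent change in land area for the cities highlighted
# in the text; the square-mile figures are taken from Table 3.
land_area = {              # city: (square miles in 1950, square miles in 2000)
    "Charlotte": (30.0, 242.3),
    "Phoenix":   (17.1, 474.9),
    "Nashville": (22.0, 473.3),
}

for city, (area_1950, area_2000) in land_area.items():
    pct_change = (area_2000 - area_1950) / area_1950 * 100
    print(f"{city}: {pct_change:.2f}% increase in land area, 1950-2000")
# Each of these cities expanded its physical footprint by well over 700 percent.
```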

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950 – 2000

                                              1950         1960         1970         1980         1990         2000

Population density (persons per square mile)  50.9         50.7         57.4         64           70.3         79.6

Population by region
  West                                  19,561,525   28,053,104   34,804,193   43,172,490   52,786,082   63,197,932
  South                                 47,197,088   54,973,113   62,795,367   75,372,362   85,445,930  100,236,820
  Midwest                               44,460,762   51,619,139   56,571,663   58,865,670   59,668,632   64,392,776
  Northeast                             39,477,986   44,677,819   49,040,703   49,135,283   50,809,229   53,594,378

Population by region, percent of total
  West                                        13           15.6         17.1         19.1         21.2         22.5
  South                                       31.3         30.7         30.9         33.3         34.4         35.6
  Midwest                                     29.5         28.8         27.8         26           24           22.9
  Northeast                                   26.2         24.9         24.1         21.7         20.4         19

Population living in non-metropolitan areas (millions)
                                              66.2         65.9         63           57.1         56           55.4
Population living in metropolitan areas (millions)
                                              84.5        113.5        140.2        169.4        192.7        226
Percent in suburbs in metropolitan areas
                                              23.3         30.9         37.6         44.8         46.2         50
Percent in central city in metropolitan areas
                                              32.8         32.3         31.4         30           31.3         30.3
Percent living in the ten largest cities
                                              14.4         12.1         10.8          9.2          8.8          8.5

Percentage minority by region (reported for 1980, 1990, and 2000 only)
  West                                                                               26.5         33.3         41.6
  South                                                                              25.7         28.2         34.2
  Midwest                                                                            12.5         14.2         18.6
  Northeast                                                                          16.6         20.6         26.6

Housing units by region
  West                                   6,532,785    9,557,505   12,031,802   17,082,919   20,895,221   24,378,020
  South                                 13,653,785   17,172,688   21,031,346   29,419,692   36,065,102   42,382,546
  Midwest                               13,745,646   16,797,804   18,973,217   22,822,059   24,492,718   26,963,635
  Northeast                             12,051,182   14,798,360   16,642,665   19,086,593   20,810,637   22,180,440

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000: no region has a minority population share greater than 26.5 percent in 1980, but only the Midwest remains below that level by 2000. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the growth in the number of Americans living in suburban communities that has fueled the dramatic increase in "urban" residents. This finding is reinforced by looking at the figures for average population density for the United States as a whole, the figures listing the numbers of Americans living in metropolitan versus non-metropolitan areas, and the figures listing the percentage of Americans living in the ten largest cities in the United States.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited the cities of Detroit and Boston would be able to tell you that the urban decline in these cities has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit's downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city's public spaces would not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city's downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the population losses experienced by Detroit and Boston do not tell the full story about how urban decline has affected these cities. They have both lost population, yet Detroit has lost a great deal more – it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers will begin to leave the city. Yet, when population in a city begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a stock of housing that interacts with a reduction in housing demand, producing a rapid reduction in the real price of housing. Empirical evidence supports the assertions made by the model, for in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
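The asymmetry at the heart of the Glaeser and Gyourko argument can be illustrated with a toy supply-and-demand calculation: when demand rises, new construction keeps prices near construction cost, but when demand falls the durable housing stock cannot shrink, so prices absorb the entire shock. The sketch below is only a stylized illustration of that logic; the linear demand curve, the construction cost, and the population path are invented for the example and are not taken from their paper.

```python
# Toy illustration of the durable-housing asymmetry described above. The demand
# parameters, construction cost, and population numbers are invented for the sketch.
CONSTRUCTION_COST = 100_000   # price at which builders are willing to add new units
ALPHA, BETA = 0.2, 0.3        # hypothetical linear inverse-demand parameters

def market_outcome(population, stock):
    """Return (price, stock) when housing can be built but never torn down."""
    price = ALPHA * population - BETA * stock        # willingness to pay for the marginal unit
    if price > CONSTRUCTION_COST:                    # growing city: builders expand the stock
        stock = (ALPHA * population - CONSTRUCTION_COST) / BETA
        price = CONSTRUCTION_COST
    return price, stock                              # declining city: stock stays put, price falls

stock = 500_000
for population in (1_200_000, 1_500_000, 1_300_000, 1_100_000):   # boom, then decline
    price, stock = market_outcome(population, stock)
    print(f"population {population:>9,}: price ${price:>9,.0f}, housing stock {stock:>9,.0f}")
```

Under these invented parameters the price never rises above construction cost on the way up, but it falls far below that level once population declines, which is the mechanism that produces the cheap housing described above.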

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value for the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner occupied housing declined in cities that experienced a race riot in the 1960s, and, in particular, it focuses on the gap in property values that developed between white and black owned homes. Nonetheless, a great deal of work still remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their location carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important decisions about location, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, the firm owners must decide where the business should be located within the chosen city. In each case, transportation costs and input costs must dominate the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced out by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and the output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the costs of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: Land close to the city’s transportation hub was in high demand, and, therefore, relatively expensive. It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to do this because the costs associated with transporting inputs and outputs to and from the transportation hub would have dominated the savings enjoyed from buying cheaper land on the periphery of the city. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.

Yet, transportation costs and input prices have not simply varied across space; they’ve also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should choose to locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail cost of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measuring the prices in constant 1925 dollars (these values would be roughly $21,260 and $5400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

Year    Millions of Registered Vehicles
1910                 0.5
1920                 8.1
1930                23.0
1940                27.5
1950                40.4
1960                61.7
1970                89.2
1980               131.6

Source: Muller, p. 36.

While changes in transportation technology had a profound effect on firms' and residents' choices about where to locate within a given city, they also affected the choice of which city would be the best for the firm or resident. Americans began demanding more and improved roads to capitalize on the mobility made possible by the car. Also, the automotive, construction, and tourism related industries lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously relegated to local governments. The landmark National Interstate and Defense Highways Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities' central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (However, it is important to note that the widespread availability of air conditioning was a critical factor in Americans' willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73% of blacks lived in urban areas, and the majority of the urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at locations in the suburbs, and the result for many was a "spatial mismatch" – they lived in the inner city where employment opportunities were scarce, yet lacked access to the transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks' attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. Inner-city African Americans coped with high unemployment rates, and high crime rates and urban disturbances such as the race riots of the 1960s were obvious symptoms of this economic distress. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities. White city residents had once been "pulled" to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being "pushed" by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit's residents were African American – a stark contrast from 1950 when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find their arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, changes in information technology no longer make it a requirement that we locate ourselves in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm's productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is "Silicon Valley" (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at "computer firms" (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child's soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity at their own jobs. Another example of economies of agglomeration in Silicon Valley is the labor pooling that occurs there. Because workers who are trained in computer related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another of the benefits that some individuals will associate with urban living is the diversity of products and experiences that are available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… literally almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven't had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian food restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian food restaurant to operate and thrive. Moreover, exposure to Persian food may change people's tastes and preferences. Over time, the amount of Persian food demanded (on average) by each inhabitant of the city may increase.

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits of locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities in the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3 (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, editors, The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

State | 1750 White | 1750 Black | 1790 White | 1790 Free Nonwhite | 1790 Slave | 1810 White | 1810 Free Nonwhite | 1810 Slave | 1860 White | 1860 Free Nonwhite | 1860 Slave
Connecticut | 108,270 | 3,010 | 232,236 | 2,771 | 2,648 | 255,179 | 6,453 | 310 | 451,504 | 8,643 | –
Delaware | 27,208 | 1,496 | 46,310 | 3,899 | 8,887 | 55,361 | 13,136 | 4,177 | 90,589 | 19,829 | 1,798
Georgia | 4,200 | 1,000 | 52,886 | 398 | 29,264 | 145,414 | 1,801 | 105,218 | 591,550 | 3,538 | 462,198
Maryland | 97,623 | 43,450 | 208,649 | 8,043 | 103,036 | 235,117 | 33,927 | 111,502 | 515,918 | 83,942 | 87,189
Massachusetts | 183,925 | 4,075 | 373,187 | 5,369 | – | 465,303 | 6,737 | – | 1,221,432 | 9,634 | –
New Hampshire | 26,955 | 550 | 141,112 | 630 | 157 | 182,690 | 970 | – | 325,579 | 494 | –
New Jersey | 66,039 | 5,354 | 169,954 | 2,762 | 11,423 | 226,868 | 7,843 | 10,851 | 646,699 | 25,318 | –
New York | 65,682 | 11,014 | 314,366 | 4,682 | 21,193 | 918,699 | 25,333 | 15,017 | 3,831,590 | 49,145 | –
North Carolina | 53,184 | 19,800 | 289,181 | 5,041 | 100,783 | 376,410 | 10,266 | 168,824 | 629,942 | 31,621 | 331,059
Pennsylvania | 116,794 | 2,872 | 317,479 | 6,531 | 3,707 | 786,804 | 22,492 | 795 | 2,849,259 | 56,956 | –
Rhode Island | 29,879 | 3,347 | 64,670 | 3,484 | 958 | 73,214 | 3,609 | 108 | 170,649 | 3,971 | –
South Carolina | 25,000 | 39,000 | 140,178 | 1,801 | 107,094 | 214,196 | 4,554 | 196,365 | 291,300 | 10,002 | 402,406
Virginia | 129,581 | 101,452 | 442,117 | 12,866 | 292,627 | 551,534 | 30,570 | 392,518 | 1,047,299 | 58,154 | 490,865
United States | 934,340 | 236,420 | 2,792,325 | 58,277 | 681,777 | 4,486,789 | 167,691 | 1,005,685 | 12,663,310 | 361,247 | 1,775,515

(A dash indicates that no slaves were reported.)

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

State | 1750 Black/total population | 1790 Slave/total population | 1810 Slave/total population | 1860 Slave/total population
Alabama | – | – | – | 45.12
Arkansas | – | – | – | 25.52
Delaware | 5.21 | 15.04 | 5.75 | 1.60
Florida | – | – | – | 43.97
Georgia | 19.23 | 35.45 | 41.68 | 43.72
Kentucky | – | 16.87 | 19.82 | 19.51
Louisiana | – | – | – | 46.85
Maryland | 30.80 | 32.23 | 29.30 | 12.69
Mississippi | – | – | – | 55.18
Missouri | – | – | – | 9.72
North Carolina | 27.13 | 25.51 | 30.39 | 33.35
South Carolina | 60.94 | 43.00 | 47.30 | 57.18
Tennessee | – | – | 17.02 | 24.84
Texas | – | – | – | 30.22
Virginia | 43.91 | 39.14 | 40.27 | 30.75
Overall | 37.97 | 33.95 | 33.25 | 32.27

(A dash indicates no figure reported for that year.)

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State | Total slaveholders | Held 1 slave | Held 2 slaves | Held 3 slaves | Held 4 slaves | Held 5 slaves | Held 1-5 slaves | Held 100-499 slaves | Held 500+ slaves
AL | 33,730 | 5,607 | 3,663 | 2,805 | 2,329 | 1,986 | 16,390 | 344 | –
AR | 11,481 | 2,339 | 1,503 | 1,070 | 894 | 730 | 6,536 | 65 | 1
DE | 587 | 237 | 114 | 74 | 51 | 34 | 510 | – | –
FL | 5,152 | 863 | 568 | 437 | 365 | 285 | 2,518 | 47 | –
GA | 41,084 | 6,713 | 4,335 | 3,482 | 2,984 | 2,543 | 20,057 | 211 | 8
KY | 38,645 | 9,306 | 5,430 | 4,009 | 3,281 | 2,694 | 24,720 | 7 | –
LA | 22,033 | 4,092 | 2,573 | 2,034 | 1,536 | 1,310 | 11,545 | 543 | 4
MD | 13,783 | 4,119 | 1,952 | 1,279 | 1,023 | 815 | 9,188 | 16 | –
MS | 30,943 | 4,856 | 3,201 | 2,503 | 2,129 | 1,809 | 14,498 | 315 | 1
MO | 24,320 | 6,893 | 3,754 | 2,773 | 2,243 | 1,686 | 17,349 | 4 | –
NC | 34,658 | 6,440 | 4,017 | 3,068 | 2,546 | 2,245 | 18,316 | 133 | –
SC | 26,701 | 3,763 | 2,533 | 1,990 | 1,731 | 1,541 | 11,558 | 441 | 8
TN | 36,844 | 7,820 | 4,738 | 3,609 | 3,012 | 2,536 | 21,715 | 47 | –
TX | 21,878 | 4,593 | 2,874 | 2,093 | 1,782 | 1,439 | 12,781 | 54 | –
VA | 52,128 | 11,085 | 5,989 | 4,474 | 3,807 | 3,233 | 28,588 | 114 | –
TOTAL | 393,967 | 78,726 | 47,244 | 35,700 | 29,713 | 24,886 | 216,269 | 2,341 | 22

(A dash indicates no slaveholders in that category.)

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? U.S. slaves enjoyed an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed only slightly higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure: slave prices by age. Source: Fogel and Engerman (1974).]

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1,381 in 1861 and for $1,116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.
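
A small worked example shows why the real decline exceeded the nominal one. The two nominal averages come from the sentence above; the 60 percent price-level increase is an assumed figure used only to illustrate the arithmetic, not a documented wartime inflation rate.

```python
# Nominal vs. real price change for a New Orleans prime male slave, 1861-1862.
# Nominal averages are from the text; the price index is a hypothetical assumption.

nominal_1861 = 1381.0
nominal_1862 = 1116.0

price_index_1861 = 1.00   # base year
price_index_1862 = 1.60   # assumed 60% inflation between the two years (illustrative only)

real_1862 = nominal_1862 / price_index_1862           # 1862 price in 1861 dollars

nominal_change = nominal_1862 / nominal_1861 - 1      # about -19%
real_change = real_1862 / nominal_1861 - 1            # about -49% under the assumed index

print(f"Nominal change: {nominal_change:.1%}")
print(f"Real change (assumed index): {real_change:.1%}")
```

Any positive inflation assumption produces the same qualitative result: the real price falls by more than the nominal price.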


[Figure: New Orleans slave prices during the Civil War. Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known instance shows that contemporaneous free labor thought that urban slavery may even have worked too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
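
The flavor of a Conrad-and-Meyer-style rate-of-return calculation can be sketched with hypothetical figures: treat the purchase price as an upfront outlay, the annual value of output net of maintenance and supervision as the payoff, and solve for the discount rate that equates the two over an expected working life. The purchase price, net earnings, and lifespan below are assumptions chosen for illustration; only the form of the calculation follows the description above.

```python
# A minimal sketch of an asset-style rate-of-return calculation:
# the internal rate of return that equates a purchase price with the
# discounted stream of annual net earnings over an expected working life.
# All numbers are hypothetical.

def npv(rate: float, purchase_price: float, net_earnings: float, years: int) -> float:
    """Net present value: pay purchase_price at t=0, receive net_earnings for `years` years."""
    return -purchase_price + sum(net_earnings / (1 + rate) ** t for t in range(1, years + 1))

def irr(purchase_price: float, net_earnings: float, years: int) -> float:
    """Solve npv(rate) = 0 by bisection on [0, 1]; npv is decreasing in the rate."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, purchase_price, net_earnings, years) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical prime field hand: $1,500 purchase price, $180 of annual revenue
# less $30 of maintenance and supervision, and a 25-year expected working life.
rate = irr(purchase_price=1500, net_earnings=150, years=25)
print(f"Implied rate of return: {rate:.1%}")   # about 8.8% under these assumptions
```

Changing any of the assumed inputs changes the answer, of course; the point is only that the return on a slave could be computed and compared with returns on other assets, which is what Conrad and Meyer and later Fogel and Engerman did with actual data.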

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm that is otherwise identical to a free farm (in terms of the amount of land, livestock, machinery and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.
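
A minimal sketch of how such an efficiency comparison works, assuming a Cobb-Douglas-style aggregator with invented factor shares and farm data; only the general form of an output-per-composite-input index follows the description above, not Fogel and Engerman’s actual procedure.

```python
# Total factor productivity as output divided by a geometric index of inputs.
# Factor shares and farm data are hypothetical; shares must sum to one.

def tfp(output: float, land: float, labor: float, capital: float,
        shares=(0.3, 0.5, 0.2)) -> float:
    a_land, a_labor, a_capital = shares
    composite_input = (land ** a_land) * (labor ** a_labor) * (capital ** a_capital)
    return output / composite_input

# Two farms with identical inputs; the second produces 53 percent more output.
free_farm = tfp(output=100, land=200, labor=6, capital=40)
slave_farm = tfp(output=153, land=200, labor=6, capital=40)

print(f"Relative efficiency: {slave_farm / free_farm:.2f}")   # 1.53
```

Because the inputs are identical, the composite input cancels and the ratio reduces to the ratio of outputs, which is exactly what the claim of “53 percent more efficient” means in this framework.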

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.
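
A small sketch of how hand ratings translate a mixed workforce into “prime-hand equivalents.” The rating fractions below are hypothetical and chosen only to illustrate the bookkeeping; the text does not supply actual rating schedules.

```python
# Hypothetical hand ratings: each worker is rated as a fraction of a prime male
# field hand, and a gang's labor input is the sum of those fractions.

RATINGS = {
    "prime male": 1.00,
    "prime female": 0.80,
    "boy (14)": 0.50,
    "girl (14)": 0.45,
    "elderly": 0.40,
}

gang = ["prime male", "prime male", "prime female", "boy (14)", "elderly"]

prime_hand_equivalents = sum(RATINGS[worker] for worker in gang)
print(f"Gang of {len(gang)} workers = {prime_hand_equivalents:.2f} prime-hand equivalents")
```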

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
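
The disagreement can be restated as a simple expropriation-rate calculation: the share of the value a worker produces that the worker does not receive back as food, clothing, shelter, and other consumption. The sketch below contrasts the two readings cited above using a hypothetical $100 of annual output per worker.

```python
# Share of output retained by the worker versus expropriated by the owner.
# The $100 of output is hypothetical; the retained shares mirror the two
# estimates discussed in the text (about 90 percent versus about 50 percent).

def share_retained(value_produced: float, value_received: float) -> float:
    return value_received / value_produced

def expropriation_rate(value_produced: float, value_received: float) -> float:
    return 1 - share_retained(value_produced, value_received)

print(f"Fogel-Engerman-style reading: {share_retained(100, 90):.0%} retained, "
      f"{expropriation_rate(100, 90):.0%} expropriated")
print(f"Alternative reading:          {share_retained(100, 50):.0%} retained, "
      f"{expropriation_rate(100, 50):.0%} expropriated")
```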

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —–
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 —- —-
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 —- —-
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930.

With the exception of the figures for 2000, participation rates are based on the gainful employment standard.

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was sixteen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 —- —-
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 —- —-
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Note: North includes the New England, Middle Atlantic, and North Central regions

South includes the South Atlantic and South Central regions

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
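
The logic of this decision rule can be expressed as a minimal sketch. The Python snippet below simply restates the two conditions described above, a resource constraint plus a comparison of net benefits; the function names and the numbers in the example are hypothetical illustrations, not estimates from the retirement literature.

```python
def can_retire(resources, annual_need, years_remaining):
    """Resource constraint: accumulated resources (pension income, savings,
    anticipated family contributions) must cover consumption over the
    expected remaining life span."""
    return resources >= annual_need * years_remaining


def chooses_retirement(resources, annual_need, years_remaining,
                       value_of_leisure, labor_income, cost_of_working):
    """Retire only if the resource constraint is met AND the net benefit of
    retirement (leisure) exceeds the net benefit of working (labor income
    less the costs associated with working, e.g. illness or disability)."""
    if not can_retire(resources, annual_need, years_remaining):
        return False
    return value_of_leisure > (labor_income - cost_of_working)


# Hypothetical numbers: a larger pension raises 'resources' and so relaxes
# the constraint, while rising costs of working tip the comparison as well.
print(chooses_retirement(resources=120_000, annual_need=10_000,
                         years_remaining=12, value_of_leisure=9_000,
                         labor_income=15_000, cost_of_working=8_000))
```

In this framework, anything that raises the value of leisure, raises pension income, or raises the costs of working makes the retirement condition easier to satisfy.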

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent among elderly men born in the nineteenth century than among those born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3,430 (in 1996 dollars), comparable to levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living roughly eight times that of Americans living a century ago. More income has made it possible to save for an extended retirement.
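
As a quick check on these figures, the two GDP per capita numbers quoted above imply the following back-of-the-envelope growth arithmetic (a sketch using only the values in the text):

```python
# Real GDP per capita in 1996 dollars, as quoted in the text.
gdp_1890 = 3_430
gdp_1990 = 26_889

ratio = gdp_1990 / gdp_1890             # about 7.8, i.e. nearly eightfold
annual_growth = ratio ** (1 / 100) - 1  # compound growth over 100 years

print(f"ratio: {ratio:.2f}x, implied annual growth: {annual_growth:.2%}")
# ratio: 7.84x, implied annual growth: 2.08%
```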

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. As a result of this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans that were funded under a ‘pay as you go’ system — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. Thanks to strong political allies, the struggling railroad pensions were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions, and to provide tax incentives (like those for employer-provided pensions) for those without access to private pensions to save for retirement. Since 1962, the self-employed have been able to establish ‘Keogh plans’ — tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to follow funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA provided those not covered by a private pension with the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from ‘defined benefit’ to ‘defined contribution’ plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and more choice over how the pension is invested.
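
The contrast can be made concrete with a small sketch. The formulas and parameter values below (accrual rate, contribution rate, rate of return) are purely illustrative assumptions rather than the terms of any actual plan:

```python
def defined_benefit(final_salary, years_of_service, accrual_rate=0.015):
    """Defined benefit: the plan promises an annual benefit in advance,
    here via a stylized final-salary formula (hypothetical accrual rate)."""
    return accrual_rate * years_of_service * final_salary


def defined_contribution(salary, years, contribution_rate=0.08, annual_return=0.05):
    """Defined contribution: only the contributions are specified; the
    eventual nest egg depends on investment returns, so the investment
    risk is borne by the employee."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + contribution_rate * salary
    return balance


# Hypothetical worker: $50,000 salary, 30 years of service.
print(defined_benefit(50_000, 30))       # 22,500 per year, fixed by formula
print(defined_contribution(50_000, 30))  # roughly 266,000, varies with returns
```

Under these assumptions the defined-benefit worker knows the annuity amount in advance, while the defined-contribution balance would be larger or smaller depending on realized market returns.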

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.

References

Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf.

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, Jan 18, 2002. http://www.eh.net/encyclopedia/contents/maloney.african.american.php

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997. http://www.ipums.umn.edu

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/economic-history-of-retirement-in-the-united-states/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets are relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. When viewed from some distance and looked at in the long-run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Those who could not pay typically financed the voyage by signing contracts, or “indentures,” committing themselves to work for a fixed number of years in the future—their labor being their only viable asset—with British merchants, who then sold these contracts to colonists after their ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted that risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Destination Number Percent of Total Emigration Percent Listed as Servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 100.00 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, the use of indentured servitude to finance the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century, it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers (such as railroad construction companies) who could not rely on their existing labor force to recruit new hires. Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor to capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.
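
The capital-dilution logic in the preceding paragraph can be illustrated with a textbook production function. The Cobb-Douglas form and parameter values below are illustrative assumptions, not estimates drawn from Carter and Sutch (1999):

```python
# Illustrative Cobb-Douglas economy: Y = A * K**alpha * L**(1 - alpha),
# so the competitive wage is the marginal product of labor,
# w = (1 - alpha) * A * (K / L)**alpha, which depends only on the K/L ratio.
A, alpha = 1.0, 0.3


def wage(K, L):
    return (1 - alpha) * A * (K / L) ** alpha


print(wage(100, 100))  # baseline wage
print(wage(100, 110))  # 10% more labor, capital fixed: K/L falls, wage falls
print(wage(110, 110))  # capital inflow keeps pace with labor: wage unchanged
```

Under these assumptions the wage falls by roughly 3 percent when the labor force grows 10 percent with capital fixed, and is unchanged when capital grows in step with labor.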

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and they were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Year Total Labor Force (1000s) Agriculture (%) Non-Agriculture Total (%) Manufacturing (%) Services (%)
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because many employment relationships are long-term, adjustments can occur along margins other than wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, cities throughout the country had for the first time to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills, their productivity increased, giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses, contracting with the firm to supply components or finished products for an agreed price, and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savoca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor turnover and of informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job-ladders and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985) others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies that are sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade, with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers’ compensation insurance schemes and remove the issue from the courts. Once introduced, workers’ compensation schemes spread quickly: nine states passed legislation in 1911; 13 more had joined them by 1913; and by 1920, 44 states had such legislation (Fishback 2001).

Along with workers’ compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males. But rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and created minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of its provisions were modified in 1947 by the Taft-Hartley Act, the Wagner Act’s passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly—through the establishment of occupational safety regulations, and anti-discrimination laws, for example—and indirectly—through its efforts to manage the macroeconomy to ensure maximum employment.

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553), “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as workers’ compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, pp. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college-educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of some other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
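
To make such comparisons concrete, the short sketch below (in Python, with entirely hypothetical wage and cost-of-living figures) shows one simple way to express a region's real wage relative to a base region; the function and numbers are illustrative and are not drawn from any of the studies cited in this essay.

    # Illustrative only: hypothetical wages and cost-of-living indices,
    # not estimates from any study cited in this essay.
    def relative_real_wage(wage, col_index, base_wage, base_col_index):
        """Real wage in a region expressed relative to a base region (= 100)."""
        return 100 * (wage / col_index) / (base_wage / base_col_index)

    # Hypothetical example: a daily wage of $1.30 with a cost-of-living index of
    # 0.95, compared with $1.00 and an index of 1.00 in the base region.
    print(round(relative_real_wage(1.30, 0.95, 1.00, 1.00), 1))  # about 136.8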

Falling transportation and communications costs have encouraged a trend toward diminishing wage gaps, but this trend has not been consistent over time, nor has it applied to all markets in equal measure. That said, what stands out is in fact the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration, wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and briefly reversed wage convergence. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Within the United States, wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data—Margo (2000a), Sundstrom and Rosenbloom (1993), and Coelho and Shepherd (1976)—are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey, and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.
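
As a small illustration of the averaging step described in the note above, the sketch below (Python, with invented city-level figures rather than Montgomery's data) forms an unweighted regional average of city wages expressed relative to the Northeast.

    # Hypothetical relative wages (Northeast = 100) for cities in one region;
    # the numbers are invented for illustration, not taken from Montgomery (1992).
    region_cities = {"City A": 92.0, "City B": 97.5, "City C": 101.0}

    # The unweighted average across cities gives the region's relative real wage.
    regional_relative_wage = sum(region_cities.values()) / len(region_cities)
    print(round(regional_relative_wage, 1))  # 96.8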

Despite the large North-South wage gap, Table 3 shows that there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.

Table 3

Net Migration by Region, and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-00 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-00 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age- and sex-specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase this implies net migration into the region; if the actual increase is less than predicted this implies net migration out of the region. (An illustrative calculation appears after the source line below.) The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
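
The calculation described in the note to Table 3 can be illustrated with a short sketch; the population figures and the single survival rate below are invented for demonstration (the actual estimates apply age- and sex-specific rates to the region's demographic structure) and do not reproduce Eldridge and Thomas's numbers.

    # Hypothetical illustration of the net-migration calculation described in the
    # note to Table 3; all figures are invented.
    pop_start = 1_000_000        # region's population at the start of the decade
    births = 280_000             # births during the decade
    survival_rate = 0.90         # simplified: one survival rate instead of
                                 # age- and sex-specific rates
    pop_end_actual = 1_180_000   # enumerated population at the end of the decade

    # Predicted end-of-decade population in the absence of migration.
    pop_end_predicted = (pop_start + births) * survival_rate

    # Positive values imply net in-migration; negative values net out-migration.
    net_migration = pop_end_actual - pop_end_predicted
    rate_per_1000 = 1000 * net_migration / pop_start
    print(int(net_migration), round(rate_per_1000, 1))  # 28000 28.0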

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive and this essay can only touch on a few of the more general themes raised here as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000a, ch. 4) offers evidence of a high degree of equalization between farm and urban wages within local labor markets as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Hatton 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century has seen a substantial convergence in both of these differentials. Table 4 displays the earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher-paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full-time, full-year workers. Data for 2004 are median weekly earnings of full-time wage and salary workers derived from the Current Population Survey, accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of those of men, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s, at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower-paying jobs. Whether these differences are the result of persistent discrimination, or arise because of differences in productivity, or reflect a choice by women to trade off greater flexibility in labor market commitment for lower pay remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, resulting in higher levels of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurement of the unemployment rate began only in 1940. Prior to this date, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that both the average level of unemployment and its volatility declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.
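
To illustrate the kind of interpolation at issue, the sketch below (Python, with invented benchmark rates, an invented indicator series, and an arbitrary scaling factor) combines a straight-line trend between two census benchmarks with a cyclical adjustment from an auxiliary series; it is a simplified caricature, not Lebergott's or Romer's actual procedure.

    # Invented benchmark unemployment rates (percent) at two census years and an
    # invented cyclical indicator (deviation from trend) for intervening years.
    benchmark = {1900: 5.0, 1910: 5.9}
    indicator = {1900: 0.0, 1905: 2.5, 1910: 0.0}
    scale = 0.8   # assumed sensitivity of unemployment to the indicator

    def interpolated_rate(year):
        # Straight-line interpolation between the two benchmarks...
        share = (year - 1900) / (1910 - 1900)
        trend = benchmark[1900] + share * (benchmark[1910] - benchmark[1900])
        # ...plus a cyclical adjustment driven by the auxiliary indicator.
        return trend + scale * indicator.get(year, 0.0)

    for yr in (1900, 1905, 1910):
        print(yr, round(interpolated_rate(yr), 2))
    # Different choices of indicator and scale change the volatility of the
    # interpolated series, which is the crux of the Lebergott-Romer debate.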

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in the increased premium earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes—especially those associated with the increased use of information technology—have increased the relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue-collar jobs.
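
For readers unfamiliar with the 90-10 comparison, the sketch below computes the ratio of the 90th to the 10th percentile of a small, invented wage sample; the figures bear no relation to the estimates in Plotnick et al. (2000).

    import numpy as np

    # Hypothetical hourly wages; invented for illustration only.
    wages = np.array([7.5, 9.0, 11.0, 14.0, 18.0, 22.0, 28.0, 36.0, 48.0, 65.0])

    p90, p10 = np.percentile(wages, [90, 10])
    print(round(p90 / p10, 2))  # the 90-10 wage ratio, roughly 5.6 for this sample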

Efforts to expand the scope of analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather they are subject to historical forces of increasing returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Timothy J. Hatton. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J. III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago Press, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

Immigration to the United States

Raymond L. Cohn, Illinois State University (Emeritus)

For good reason, it is often said the United States is a nation of immigrants. Almost every person in the United States is descended from someone who arrived from another country. This article discusses immigration to the United States from colonial times to the present. The focus is on individuals who paid their own way, rather than slaves and indentured servants. Various issues concerning immigration are discussed: (1) the basic data sources available, (2) the variation in the volume over time, (3) the reasons immigration occurred, (4) nativism and U.S. immigration policy, (5) the characteristics of the immigrant stream, (6) the effects on the United States economy, and (7) the experience of immigrants in the U.S. labor market.

For readers who wish to further investigate immigration, the following works listed in the Reference section of this entry are recommended as general histories of immigration to the United States: Hansen (1940); Jones (1960); Walker (1964); Taylor (1971); Miller (1985); Nugent (1992); Erickson (1994); Hatton and Williamson (1998); and Cohn (2009).

The Available Data Sources

The primary source of data on immigration to the United States is the Passenger Lists, though U.S. and state census materials, Congressional reports, and company records also contain material on immigrants. In addition, the Integrated Public Use Microdata Series (IPUMS) web site at the University of Minnesota (http://www.ipums.umn.edu/usa/) contains data samples drawn from a number of federal censuses. Since the samples are of individuals and families, the site is useful in immigration research. A number of the countries from which the immigrants left also kept records about the individuals. Many of these records were originally summarized in Ferenczi (1970). Although records from other countries are useful for some purposes, the U.S. records are generally viewed as more complete, especially for the period before 1870. It is worthy of note that comparisons of the lists between countries often lead to somewhat different results. It is also probable that, during the early years, a few of the U.S. lists were lost or never collected.

Passenger Lists

The U.S. Passenger Lists resulted from an 1819 law requiring every ship carrying passengers that arrived in the United States from a foreign port to file with the port authorities a list of all passengers on the ship. These records are the basis for the vast majority of the historical data on immigration. For example, virtually all of the tables in the chapter on immigration in Carter et al. (2006) are based on these records. The Passenger Lists recorded a great deal of information. Each list indicates the name of the ship, the name of the captain, the port(s) of embarkation, the port of arrival, and the date of arrival. Following this information is a list of the passengers. Each person’s name is listed, along with age, gender, occupation, country of origin, country of destination, and whether or not the person died on the voyage. It is often possible to distinguish family groups since family members were usually grouped together and, to save time, the compilers frequently used ditto marks to indicate the same last name. Various data based on the lists were published in Senate or Congressional Reports at the time. Due to their usefulness in genealogical research, the lists are now widely available on microfilm and are increasingly available on CD-ROM. Even a few public libraries in major cities have full or partial collections of these records. Most of the ship lists are also available on-line at various web sites.

The Volume of Immigration

Both the total volume of immigration to the United States and the immigrants’ countries of origin varied substantially over time. Table 1 provides the basic data on total immigrant volume by time period broken down by country or area of origin. The column “Average Yearly Total – All Countries” presents the average yearly total immigration to the United States in the time period given. Immigration rates – the average number of immigrants entering per thousand individuals in the U.S. population – are shown in the next column. The columns headed by country or area names show the percentage of immigrants coming from that place. The time periods in Table 1 have been chosen for illustrative purposes. A few things should be noted concerning the figures in Table 1. First, the estimates for much of the period since 1820 are based on the original Passenger Lists and are subject to the caveats discussed above. The estimates for the period before 1820 are the best currently available but are less precise than those after 1820. Second, though it was legal to import slaves into the United States (or the American colonies) before 1808, the estimates presented exclude slaves. Third, though illegal immigration into the United States has occurred, the figures in Table 1 include only legal immigrants. In 2015, the total number of illegal immigrants in the United States was estimated at around 11 million. These individuals were mostly from Mexico, Central America, and Asia.

Trends over Time

From the data presented in Table 1, it is apparent that the volume of immigration and its rate relative to the U.S. population varied over time. Immigration was relatively small until a noticeable increase occurred in the 1830s and a huge jump in the 1840s. The volume passed 200,000 for the first time in 1847, and the period between 1847 and 1854 saw the highest rate of immigration in U.S. history. From the level reached between 1847 and 1854, volume fell and rose repeatedly through 1930. For the period from 1847 through 1930, the average yearly volume was 434,000. During these years, immigrant volume peaked between 1900 and 1914, when an average of almost 900,000 immigrants arrived in the United States each year. This period also ranks second in terms of the rate of immigration relative to the U.S. population. The volume and rate fell to low levels between 1931 and 1946, though by the 1970s the volume had again reached that experienced between 1847 and 1930. The rise in volume continued through the 1980s and 1990s, though the rate per one thousand American residents has remained well below that experienced before 1915. It is notable that since about 1990, the average yearly volume of immigration has surpassed the previous peak experienced between 1900 and 1914. In 2015, reflecting the large volume of immigration, about 15 percent of the U.S. population was foreign-born.

Table 1
Immigration Volume and Rates

Years Average Yearly Total – All Countries Immigration Rates (Per 1000 Population) Percent of Average Yearly Total
Great Britain Ireland Scandinavia and Other NW Europe Germany Central and Eastern Europe Southern Europe Asia Africa Australia and Pacific Islands Mexico Other America
1630‑1700 2,200 —- —- —- —- —- —- —- —- —- —- —- —-
1700-1780 4,325 —- —- —- —- —- —- —- —- —- —- —- —-
1780-1819 9,900 —- —- —- —- —- —- —- —- —- —- —- —-
1820-1831 14,538 1.3 22 45 12 8 0 2 0 0 —- 4 6
1832-1846 71,916 4.3 16 41 9 27 0 1 0 0 —- 1 5
1847-1854 334,506 14.0 13 45 6 32 0 0 1 0 —- 0 3
1855-1864 160,427 5.2 25 28 5 33 0 1 3 0 —- 0 4
1865-1873 327,464 8.4 24 16 10 34 1 1 3 0 0 0 10
1874-1880 260,754 5.6 18 15 14 24 5 3 5 0 0 0 15
1881-1893 525,102 8.9 14 12 16 26 16 8 1 0 0 0 6
1894-1899 276,547 3.9 7 12 12 11 32 22 3 0 0 0 2
1900-1914 891,806 10.2 6 4 7 4 45 26 3 0 0 1 5
1915-1919 234,536 2.3 5 2 8 1 7 21 6 0 1 8 40
1920-1930 412,474 3.6 8 5 8 9 14 16 3 0 0 11 26
1931-1946 50,507 0.4 10 2 9 15 8 12 3 1 1 6 33
1947-1960 252,210 1.5 7 2 6 8 4 10 8 1 1 15 38
1961-1970 332,168 1.7 6 1 4 6 4 13 13 1 1 14 38
1971-1980 449,331 2.1 3 0 1 2 4 8 35 2 1 14 30
1981-1990 733,806 3.1 2 0 1 1 3 2 37 2 1 23 27
1991-2000 909,264 3.4 2 1 1 1 11 2 38 5 1 30 9
2001-2008 1,040,951 4.4 2 0 1 1 9 1 35 7 1 17 27
2009-2015 1,046,459 4.8 1 0 1 1 5 1 40 10 1 14 27

Sources: Years before 1820: Grabbe (1989). 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Sources of Immigration

The sources of immigration have changed a number of times over the years. In general, four relatively distinct periods can be identified in Table 1. Before 1881, the vast majority of immigrants, almost 86% of the total, arrived from northwest Europe, principally Great Britain, Ireland, Germany, and Scandinavia. During the colonial period, though the data do not allow an accurate breakdown, most immigrants arrived from Britain, with smaller numbers coming from Ireland and Germany. The years between 1881 and 1893 saw a transition in the sources of U.S. immigrants. After 1881, immigrant volume from central, eastern, and southern Europe began to increase rapidly. Between 1894 and 1914, immigrants from southern, central, and eastern Europe accounted for 69% of the total. With the onset of World War I in 1914, the sources of U.S. immigration again changed. From 1915 to the present day, a major source of immigrants to the United States has been the Western Hemisphere, accounting for 46% of the total. In the period between 1915 and 1960, virtually all of the remaining immigrants came from Europe, though no specific part of Europe was dominant. Beginning in the 1960s, immigration from Europe fell off substantially and was replaced by a much larger percentage of immigrants from Asia. Also noteworthy is the rise in immigration from Africa in the twenty-first century. Thus, over the course of U.S. history, the sources of immigration shifted from northwestern Europe, to southern, central, and eastern Europe, to the Americas in combination with Europe, and finally to the current situation in which most immigrants come from the Americas, Asia, and Africa.

Duration of Voyage and Method of Travel

Before the 1840s, immigrants arrived on sailing ships. General information on the length of the voyage is unavailable for the colonial and early national periods. By the 1840s, however, the average voyage length for ships from the British Isles was five to six weeks, with those from the European continent taking a week or so longer. In the 1840s, a few steamships began to cross the Atlantic. Over the course of the 1850s, steamships began to account for a larger, though still minority, percentage of immigrant travel. By 1873, virtually all immigrants arrived on steamships (Cohn 2005). As a result, the voyage time fell initially to about two weeks and it continued to decline into the twentieth century. Steamships remained the primary means of travel until after World War II. As a consequence of the boom in airplane travel over the last few decades, most immigrants now arrive via air.

Place of Arrival

Where immigrants landed in the United States varied, especially in the period before the Civil War. During the colonial and early national periods, immigrants arrived not only at New York City but also at a variety of other ports, especially Philadelphia, Boston, New Orleans, and Baltimore. Over time, and especially when most immigrants began arriving via steamship, New York City became the main arrival port. No formal immigration facilities existed at any of the ports until New York City established Castle Garden as its landing depot in 1855. This facility, located at the tip of Manhattan, was replaced in 1892 with Ellis Island, which in turn operated until 1954.

Death Rates during the Voyage

A final aspect to consider is the mortality experienced by the individuals on board the ships. Information taken from the Passenger Lists for the sailing-ship era between 1820 and 1860 indicates a loss rate of one to two percent of the immigrants who boarded (Cohn, 2009). Given the length of the trip and taking into account the ages of the immigrants, this rate represents mortality approximately four times higher than that experienced by non-migrants. Mortality was mainly due to outbreaks of cholera and typhus on some ships, leading to especially high death rates among children and the elderly. There appears to have been little trend over time in mortality or differences in the loss rate by country of origin, though some evidence suggests the loss rate may have differed by port of embarkation. In addition, the best evidence from the colonial period finds a loss rate only slightly higher than that of the antebellum years. In the period after the Civil War, with the change to steamships and the resulting shorter travel time and improved on-board conditions, mortality on the voyages fell, though exactly how much has not been determined.

The Causes of Immigration

Economic historians generally believe no single factor led to immigration. In fact, different studies have tried to explain immigration by emphasizing different factors, with the first important study being done by Thomas (1954). The most recent attempt to comprehensively explain immigration has been by Hatton and Williamson (1998), who focus on the period between 1860 and 1914. Massey (1999) expresses relatively similar views. Hatton and Williamson view immigration from a country during this time as being caused by up to five different factors: (a) the difference in real wages between the country and the United States; (b) the rate of population growth in the country 20 or 30 years before; (c) the degree of industrialization and urbanization in the home country; (d) the volume of previous immigrants from that country or region; and (e) economic and political conditions in the United States. To this list can be added factors not relevant during the 1860 to 1914 period, such as the potato famine, the movement from sail to steam, and the presence or absence of immigration restrictions. Thus, a total of at least eight factors affected immigration.

Causes of Fluctuations in Immigration Levels over Time

As discussed above, the total volume of immigration trended upward until World War I. The initial increase in immigration during the 1830s and 1840s was caused by improvements in shipping, more rapid population growth in Europe, and the potato famine in the latter part of the 1840s, which affected not only Ireland but also much of northwest Europe. As previously noted, the steamship replaced the sailing ship after the Civil War. By substantially reducing the length of the trip and increasing comfort and safety, the steamship encouraged an increase in the volume of immigration. Part of the reason volume increased was that temporary immigration became more likely. In this situation, an individual came to the United States not planning to stay permanently but instead planning to work for a period of time before returning home. All in all, the period from 1865 through 1914, when immigration was not restricted and steamships were dominant, saw an average yearly immigrant volume of almost 529,000. In contrast, average yearly immigration between 1820 and 1860 via sailing ship was only 123,000, and even between 1847 and 1860 was only 266,000.

Another feature of the data in Table 1 is that the yearly volume of immigration fluctuated quite a bit in the period before 1914. The fluctuations are mainly due to changes in economic and political conditions in the United States. Essentially, periods of low volume corresponded with U.S. economic depressions or times of widespread opposition to immigrants. In particular, volume declined during the nativist outbreak in the 1850s and the major depressions of the 1870s and 1890s and the Great Depression of the 1930s. As discussed in the next section, the United States imposed widespread restrictions on immigration beginning in the 1920s. Since then, the volume has been subject to more direct determination by the United States government. Thus, fluctuations in the total volume of immigration over time are due to four of the eight factors discussed in the first paragraph of this section: the potato famine, the movement from sail to steam, economic and political conditions in the United States, and the presence or absence of immigration restrictions.

Factors Influencing Immigration Rates from Particular Countries

The other four factors are primarily used to explain changes in the source countries of immigration. A larger difference in real wages between the country and the United States increased immigration from the country because it meant immigrants had more to gain from the move. Because most immigrants were between 15 and 35 years old, a higher population growth 20 or 30 years earlier meant there were more individuals in the potential immigrant group. In addition, a larger volume of young workers in a country reduced job prospects at home and further encouraged immigration. A greater degree of industrialization and urbanization in the home country typically increased immigration because traditional ties with the land were broken during this period, making laborers in the country more mobile. Finally, the presence of a larger volume of previous immigrants from that country or region encouraged more immigration because potential immigrants now had friends or relatives to stay with who could smooth their transition to living and working in the United States.

Based on these four factors, Hatton and Williamson explain the rise and fall in the volume of immigration from a country to the United States. Immigrant volume initially increased as a consequence of more rapid population growth and industrialization in a country and the existence of a large gap in real wages between the country and the United States. Within a number of years, volume increased further due to the previous immigration that had occurred. Volume remained high until various changes in Europe caused immigration to decline. Population growth slowed. Most of the countries had undergone industrialization. Partly due to the previous immigration, real wages rose at home and became closer to those in the United States. Thus, each source country went through stages where immigration increased, reached a peak, and then declined.

Differences in the timing of these effects then led to changes in the source countries of the immigrants. The countries of northwest Europe were the first to experience rapid population growth and to begin industrializing. By the latter part of the nineteenth century, immigration from these countries was in the stage of decline. At about the same time, countries in central, eastern, and southern Europe were experiencing the beginnings of industrialization and more rapid population growth. This model holds directly only through the 1920s, because U.S. government policy changed. At that point, quotas were established on the number of individuals allowed to immigrate from each country. Even so, many countries, especially those in northwest Europe, had passed the point where a large number of individuals wanted to leave and thus did not fill their quotas. The quotas were binding for many other countries in Europe in which pressures to immigrate were still strong. Even today, the countries providing the majority of immigrants to the United States, those south of the United States and in Asia and Africa, are places where population growth is high, industrialization is breaking traditional ties with the land, and real wage differentials with the United States are large.

Immigration Policy and Nativism

This section summarizes the changes in U.S. immigration policy. Only the most important policy changes are discussed and a number of relatively minor changes have been ignored. Interested readers are referred to Le May (1987) and Briggs (1984) for more complete accounts of U.S. immigration policy.

Few Restrictions before 1882

Immigration into the United States was subject to virtually no legal restrictions before 1882. Essentially, anyone who wanted to enter the United States could and, as discussed earlier, no specified arrival areas existed until 1855. Individuals simply got off the ship and went about their business. Little opposition among U.S. citizens to immigration is apparent until about the 1830s. The growing concern at this time was due to the increasing volume of immigration, in both absolute terms and relative to the U.S. population, and the fact that more of the arrivals were Catholic and unskilled. The nativist feeling burst into the open during the 1850s when the Know-Nothing political party achieved a great deal of political success in the 1854 off-year elections. This party did not favor restrictions on the number of immigrants, though it did seek to restrict immigrants’ ability to quickly become voting citizens. For a short period of time, the Know-Nothings had an important presence in Congress and many state legislatures. With the downturn in immigration in 1855 and the nation’s attention turning more to the slavery issue, their influence receded.

Chinese Exclusion Act

The first restrictive immigration laws were directed against Asian countries. The first law was the Chinese Exclusion Act of 1882. This law essentially prohibited the immigration of Chinese citizens and stayed in effect until it was repealed during World War II. In 1907, Japanese immigration was substantially reduced through a Gentlemen’s Agreement between Japan and the United States. It is noteworthy that the Chinese Exclusion Act also prohibited the immigration of “convicts, lunatics, idiots” and those individuals who might need to be supported by government assistance. The latter provision was used to some extent during periods of high unemployment, though as noted above, immigration fell anyway because of the lack of jobs.

Literacy Test Adopted in 1917

The desire to restrict immigration to the United States grew over the latter part of the nineteenth century. This growth was due partly to the high volume and rate of immigration and partly to the changing national origins of the immigrants; more began arriving from southern, central, and eastern Europe. In 1907, Congress set up the Immigration Commission, chaired by Senator William Dillingham, to investigate immigration. This body issued a famous report, now viewed as flawed, concluding that immigrants from the newer parts of Europe did not assimilate easily and, in general, blaming them for various economic ills. Attempts at restricting immigration were initially made by proposing a law requiring a literacy test for admission to the United States, and such a law was finally passed in 1917. This same law also virtually banned immigration from any country in Asia. Restrictionists were no doubt displeased when the volume of immigration from Europe resumed its former level after World War I in spite of the literacy test. The movement then turned to explicitly limiting the number of arrivals.

1920s: Quota Act and National Origins Act

The Quota Act of 1921 laid the framework for a fundamental change in U.S. immigration policy. It limited the number of immigrants from Europe to a total of about 350,000 per year. National quotas were established in direct proportion to each country’s presence in the U.S. population in 1910. In addition, the act assigned Asian countries quotas near zero. Three years later in 1924, the National Origins Act instituted a requirement that visas be obtained from an American consulate abroad before immigrating, reduced the total European quota to about 165,000, and changed how the quotas were determined. Now, the quotas were established in direct proportion to each country’s presence in the U.S. population in 1890, though this aspect of the act was not fully implemented until 1929. Because relatively few individuals immigrated from southern, central, and eastern Europe before 1890, the effect of the 1924 law was to drastically reduce the number of individuals allowed to immigrate to the United States from these countries. Yet total immigration to the United States remained fairly high until the Great Depression because neither the 1921 nor the 1924 law restricted immigration from the Western Hemisphere. Thus, it was the combination of the outbreak of World War I and the subsequent 1920s restrictions that caused the Western Hemisphere to become a more important source of immigrants to the United States after 1915, though it should be recalled the rate of immigration fell to low levels after 1930.

Immigration and Nationality Act of 1965

The last major change in U.S. immigration policy occurred with the passage of the Immigration and Nationality Act of 1965. This law abolished the quotas based on national origins. Instead, a series of preferences were established to determine who would gain entry. The most important preference was given to relatives of U.S. citizens and permanent resident aliens. By the twenty-first century, about two-thirds of immigrants came through these family channels. Preferences were also given to professionals, scientists, artists, and workers in short supply. The 1965 law kept an overall quota on total immigration for Eastern Hemisphere countries, originally set at 170,000, and no more than 20,000 individuals were allowed to immigrate to the United States from any single country. This law was designed to treat all countries equally. Asian countries were treated the same as any other country, so the virtual prohibition on immigration from Asia disappeared. In addition, for the first time the law also limited the number of immigrants from Western Hemisphere countries, with the original overall quota set at 120,000. It is important to note that neither quota was binding because immediate relatives of U.S. citizens, such as spouses, parents, and minor children, were exempt from the quota. In addition, the United States has admitted large numbers of refugees at different times from Vietnam, Haiti, Cuba, and other countries. Finally, many individuals enter the United States on student visas, enroll in colleges and universities, and eventually get companies to sponsor them for a work visa. Thus, the total number of legal immigrants to the United States since 1965 has always been larger than the combined quotas. This law has led to an increase in the volume of immigration and, by treating all countries the same, has led to Asia recently becoming a more important source of U.S. immigrants.

Though features of the 1965 law have been modified since it was enacted, this law still serves as the basis for U.S. immigration policy today. The most important modifications occurred in 1986 when employer sanctions were adopted for those hiring illegal workers. On the other hand, the same law also gave temporary resident status to individuals who had lived illegally in the United States since before 1982. The latter feature led to very high volumes of legal immigration being recorded in 1989, 1990, and 1991.

The Characteristics of the Immigrants

In this section, various characteristics of the immigrant stream arriving at different points in time are discussed. The following characteristics of immigration are analyzed: gender breakdown, age structure, family vs. individual migration, and occupations listed. Virtually all the information is based on the Passenger Lists, a source discussed above.

Gender and Age

Data are presented in Table 2 on the gender breakdown and age structure of immigration. The gender breakdown and age structure remained fairly consistent in the period before 1930. Generally, about 60% of the immigrants were male. As to age structure, about 20% of immigrants were children, 70% were adults up to age 44, and 10% were older than 44. In most of the period and for most countries, immigrants were typically young single males, young couples, or, especially in the era before the steamship, families. For particular countries, such as Ireland, a large number of the immigrants were single women (Cohn, 1995). The primary exception to this generalization was the 1899-1914 period, when 68% of the immigrants were male and adults under 45 accounted for 82% of the total. This period saw the immigration of a large number of single males who planned to work for a period of months or years and then return to their homeland, a development made possible by the steamship shortening the voyage and reducing its cost (Nugent, 1992). The characteristics of the immigrant stream since 1930 have been somewhat different. Males have generally comprised less than one-half of all immigrants. In addition, the percentage of immigrants over age 45 has increased at the expense of those between the ages of 14 and 44.

Table 2
Immigration by Gender and Age

Years Percent Males Percent under 14 years Percent 14-44 years Percent 45 years and over
1820-1831 70 19 70 11
1832-1846 62 24 67 10
1847-1854 59 23 67 10
1855-1864 58 19 71 10
1865-1873 62 21 66 13
1873-1880 63 19 69 12
1881-1893 61 20 71 10
1894-1898 57 15 77 8
1899-1914 68 12 82 5
1915-1917 59 16 74 10
1918-1930 56 18 73 9
1931-1946 40 15 67 17
1947-1960 45 21 64 15
1961-1970 45 25 61 14
1971-1980 46 24 61 15
1981-1990 52 18 66 16
1991-2000 51 17 65 18
2001-2008 45 15 64 21
2009-2015 45 15 61 24

Notes: For 1918-1970, the age breakdown is “Under 16” and “16-44.” For 1971-1998, it is “Under 15” and “15-44.” For 2001-2015, it is again “Under 16” and “16-44.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years).

Occupations

Table 3 presents data on the percentage of immigrants who did not report an occupation and the percentage breakdown of those reporting an occupation. The percentage not reporting an occupation declined through 1914. The small percentages between 1894 and 1914 are a reflection of the large number of single males who arrived during this period. As is apparent, the classification scheme for occupations has changed over time. Though there is no perfect way to correlate the occupation categories used in the different time periods, skilled workers comprised about one-fourth of the immigrant stream through 1970. The immigration of farmers was important before the Civil War but declined steadily over time. The percentage of laborers has varied over time, though during some time periods they comprised one-half or more of the immigrants. The highest percentages of laborers occurred during good years for the U.S. economy (1847-54, 1865-73, 1881-93, 1899-1914), because laborers possessed the fewest skills and would have an easier time finding a job when the U.S. economy was strong. Commercial workers, mainly merchants, were an important group of immigrants very early when immigrant volume was low, but their percentage fell substantially over time. Professional workers were always a small part of U.S. immigration until the 1930s. Since 1930, these workers have comprised a larger percentage of immigrants reporting an occupation.

Table 3
Immigration by Occupation

Years Percent with no occupation listed Percent of immigrants with an occupation in each category
Professional Commercial Skilled Farmers Servants Laborers Misc.
1820-1831 61 3 28 30 23 2 14
1832-1846 56 1 12 27 33 2 24
1847-1854 54 0 6 18 33 2 41
1855-1864 53 1 12 23 23 4 37 0
1865-1873 54 1 6 24 18 7 44 1
1873-1880 47 2 4 24 18 8 40 5
1881-1893 49 1 3 20 14 9 51 3
1894-1898 38 1 4 25 12 18 37 3
Professional, technical, and kindred workers Farmers and farm managers Managers, officials, and proprietors, exc. farm Clerical, sales, and kindred workers Craftsmen, foremen, operatives, and kindred workers Private HH workers Service workers, exc. private household Farm laborers and foremen Laborers, exc. farm and mine
1899-1914 26 1 2 3 2 18 15 2 26 33
1915-1919 37 5 4 5 5 21 15 7 11 26
1920-1930 39 4 5 4 7 24 17 6 8 25
1931-1946 59 19 4 15 13 21 13 6 2 7
1947-1960 53 16 5 5 17 31 8 6 3 10
1961-1970 56 23 2 5 17 25 9 7 4 9
1971-1980 59 25 — a 8 12 36 — b 15 5 — c
1981-1990 56 14 — a 8 12 37 — b 22 7 — c
1991-2000 61 17 — a 7 9 23 — b 14 30 — c
2001-2008 76 45 — a — d 14 21 — b 18 5 — c
2009-2015 76 46 — a — d 12 19 — b 19 5 — c

a – included with “Farm laborers and foremen”; b – included with “Service workers, etc.”; c – included with “Craftsmen, etc.”; d – included with “Professional.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). From 1970 through 2001, the INS provided the following occupational categories: Professional, specialty, and technical (listed above under “Professional”); Executive, administrative, and managerial (listed above under “Managers, etc.”); Sales; Administrative support (these two are combined and listed above under “Clerical, etc.”); Precision production, craft, and repair; Operator, fabricator, and laborer (these two are combined and listed above under “Craftsmen, etc.”); Farming, forestry, and fishing (listed above under “Farm laborers and foremen”); and Service (listed above under “Service workers, etc.”). Since 2002, the Department of Homeland Security has combined the Professional and Executive categories. Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Skill Levels

The skill level of the immigrant stream is important because it potentially affects the U.S. labor force, an issue considered in the next section. Before turning to this issue, a number of comments can be made concerning the occupational skill level of the U.S. immigration stream. First, skill levels fell substantially in the period before the Civil War. Between 1820 and 1831, only 39% of the immigrants were farmers, servants, or laborers, the least skilled groups. Though the data are not as complete, immigration during the colonial period was almost certainly at least this skilled. By the 1847-54 period, however, the less-skilled percentage had increased to 76%. Second, the less-skilled percentage did not change dramatically late in the nineteenth century when the source of immigration changed from northwest Europe to other parts of Europe. Comparing 1873-80 with 1899-1914, both periods of high immigration, farmers, servants, and laborers accounted for 66% of the immigrants in the former period and 78% in the latter period. The second figure is, however, similar to that during the 1847-54 period. Third, the restrictions on immigration imposed during the 1920s had a sizable effect on the skill level of the immigrant stream. Between 1930 and 1970, only 31-34% of the immigrants were in the least-skilled group.
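
As a rough arithmetic check (the column grouping here is an assumption, though it follows the text’s identification of farmers, servants, and laborers as the least-skilled categories), these least-skilled shares can be read directly off Table 3 by summing the relevant columns:

\[
\underbrace{23}_{\text{farmers}} + \underbrace{2}_{\text{servants}} + \underbrace{14}_{\text{laborers}} = 39\ \text{percent (1820-1831)}, \qquad 33 + 2 + 41 = 76\ \text{percent (1847-1854)}.
\]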

Fourth, a deterioration in immigrant skills appears in the numbers in the 1980s and 1990s, and then an improvement appears since 2001. Both changes may be an illusion. In Table 3 for the 1980s and 1990s, the percentage in the “Professional” category falls while the percentages in the “Service” and “Farm workers” categories rise. These changes are, however, due to the amnesty for illegal immigrants resulting from the 1986 law. The amnesty led to the recorded volume of immigration in 1989, 1990, and 1991 being much higher than typical, and most of the “extra” immigrants recorded their occupation as “Service” or “Farm laborer.” If these years are ignored, then little change occurred in the occupational distribution of the immigrant stream during the 1980s and 1990s. Two caveats, however, should be noted. First, the illegal immigrants cannot, of course, be ignored. Second, the skill level of the U.S. labor force was improving over the same period. Thus, relative to the U.S. labor force and including illegal immigration, it is apparent the occupational skill level of the U.S. immigrant stream declined during the 1980s and 1990s. Turning to the twenty-first century, the percentage of the legal immigrant stream in the highest-skilled category appears to have increased. This conclusion is also uncertain because changes in how occupations were categorized beginning in 2001 make a straightforward comparison potentially inexact. This uncertainty is increased by the growing percentage of immigrants for whom no occupation is reported. It is not clear whether a larger percentage of those arriving actually did not work (recall that a growing percentage of legal immigrants are somewhat older) or if more simply did not list an occupation. Overall, detecting changes in the skill level of the legal immigrant stream since about 1930 is fraught with difficulty.

The Effects of Immigration on the United States Economy

Though immigration has effects on the country from which the immigrants leave, this section only examines the effects on the United States, mainly those occurring over longer periods of time. Over short periods of time, sizeable and potentially negative effects can occur in a specific area when there is a huge influx of immigrants. A large number of arrivals in a short period of time in one city can cause school systems to become overcrowded, housing prices and welfare payments to increase, and jobs to become difficult to obtain. Yet most economists believe the effects of immigration over time are much less harmful than commonly supposed and, in many ways, are beneficial. The following longer-term issues are discussed: the effects of immigration on the overall wage rate of U.S. workers; the effects on the wages of particular groups of workers, such as those who are unskilled; and the effects on the rate of economic growth, that is, the standard of living, in the United States. Determining the effects of immigration on the United States is complex and virtually none of the conclusions presented here are without controversy.

Immigration’s Impact on Overall Wage Rates

Immigration is popularly thought to lower the overall wage rate in the United States by increasing the supply of individuals looking for jobs. This effect may occur in an area over a fairly short period of time. Over longer time periods, however, wages will only fall if the amounts of other resources don’t change. Wages will not fall if the immigrants bring sufficient amounts of other resources with them, such as capital, or cause the amount of other resources in the economy to increase sufficiently. For example, historically the large-scale immigration from Europe contributed to rapid westward expansion of the United States during most of the nineteenth century. The westward expansion, however, increased the amounts of land and natural resources that were available, factors that almost certainly kept immigration from lowering wage rates. Immigrants also increase the amounts of other resources in the economy by running their own businesses, an activity that both historically and in recent times has occurred at a greater rate among immigrants than among native workers. By the beginning of the twentieth century, the westward frontier had been settled. A number of researchers have estimated that immigration did lower wages at this time (Hatton and Williamson, 1998; Goldin, 1994), though others have criticized these findings (Carter and Sutch, 1999). For the recent time period, most studies have found little effect of immigration on the level of wages, though a few have found an effect (Borjas, 1999).

Even if immigration leads to a fall in the wage rate, it does not follow that individual workers are worse off. Workers typically receive income from sources other than their own labor. If wages fall, then many other resource prices in the economy rise. For example, immigration increases the demand for housing and land and existing owners benefit from an increase in the current value of their property. Whether any individual worker is better off or worse off in this case is not easy to determine. It depends on the amounts of other resources each individual possesses.

Immigration’s Impact on Wages of Unskilled Workers

Consider the second issue, the effects of immigration on the wages of unskilled workers. If the immigrants arriving in the country are primarily unskilled, then the larger number of unskilled workers could cause their wage to fall if the overall demand for these workers doesn’t change. A requirement for this effect to occur is that the immigrants be less skilled than the U.S. labor force they enter. As discussed above, during colonial times immigrant volume was small and the immigrants were probably more skilled than the existing U.S. labor force. During the 1830s and 1840s, the volume and rate of immigration increased substantially and the skill level of the immigrant stream fell to approximately match that of the native labor force. Instead of lowering the wages of unskilled workers relative to those of skilled workers, however, the large inflow apparently led to little change in the wages of unskilled workers, while some skilled workers lost and others gained. The explanation for these results is that the larger number of unskilled workers resulting from immigration was a factor in employers adopting new methods of production that used more unskilled labor. As a result of this technological change, the demand for unskilled workers increased so their wage did not decline. As employers adopted these new machines, however, skilled artisans who had previously done many of these jobs, such as iron casting, suffered losses. Other skilled workers, such as many white-collar workers who were not in direct competition with the immigrants, gained. Some evidence exists to support a differential effect on skilled workers during the antebellum period (Williamson and Lindert, 1980; Margo, 2000). After the Civil War, however, the skill level of the immigrant stream was close to that of the native labor force, so immigration probably did not further affect the wage structure through the 1920s (Carter and Sutch, 1999).

Impact since World War II

The lower volume of immigration in the period from 1930 through 1960 meant immigration had little effect on the relative wages of different workers during these years. With the resumption of higher volumes of immigration after 1965, however, and with the immigrants’ skill levels being low through 2000, an effect on relative wages again became possible. In fact, the relative wages of high-school dropouts in the United States deteriorated during the same period, especially after the mid-1970s. Researchers who have studied the question have concluded that immigration accounted for about one-fourth of the wage deterioration experienced by high-school dropouts during the 1980s, though some researchers find a lower effect and others a higher one (Friedberg and Hunt, 1995; Borjas, 1999). Wages are determined by a number of factors other than immigration. In this case, it is thought the changing nature of the economy, such as the widespread use of computers increasing the benefits to education, bears more of the blame for the decline in the relative wages of high-school dropouts.

Economic Benefits from Immigration

Beyond any effect on wages, there are a number of ways in which immigration might improve the overall standard of living in an economy. First, immigrants may engage in inventive or scientific activity, with the result being a gain to everyone. Evidence exists for both the historical and more recent periods that the United States has attracted individuals with an inventive/scientific nature. The United States has always been a leader in these areas. Individuals are more likely to be successful in such an environment than in one where these activities are not as highly valued. Second, immigrants expand the size of markets for various goods, which may lower firms’ average costs as firm size increases. The result would be a decrease in the price of the goods in question. Third, most individuals immigrate between the ages of 15 and 35, so the expenses of their basic schooling are paid abroad. In the past, most immigrants, being of working age, immediately got a job. Thus, immigration increased the percentage of the population in the United States that worked, a factor that raises the average standard of living in a country. Even in more recent times, most immigrants work, though the increased proportion of older individuals in the immigrant stream means the positive effects from this factor may be lower than in the past. Fourth, while immigrants may place a strain on government services in an area, such as the school system, they also pay taxes. Even illegal immigrants directly pay sales taxes on their purchases of goods and indirectly pay property taxes through their rent. Finally, the fact that fewer individuals immigrate to the United States during periods of high unemployment is also beneficial. By reducing the number of people looking for jobs during these periods, this factor increases the likelihood U.S. citizens will be able to find a job.

The Experience of Immigrants in the U.S. Labor Market

This section examines the labor market experiences of immigrants in the United States. The issue of discrimination against immigrants in jobs is investigated along with the issue of the success immigrants experienced over time. Again, the issues are investigated for the historical period of immigration as well as more recent times. Interested readers are directed to Borjas (1999), Ferrie (2000), Carter and Sutch (1999), Hatton and Williamson (1998), and Friedberg and Hunt (1995) for more technical discussions.

Did Immigrants Face Labor Market Discrimination?

Discrimination can take various forms. The first form is wage discrimination, in which a worker of one group is paid a wage lower than an equally productive worker of another group. Empirical tests of this hypothesis generally find this type of discrimination has not existed. At any point in time, immigrants have been paid the same wage for a specific job as a native worker. If immigrants generally received lower wages than native workers, the differences reflected the lower skills of the immigrants. Historically, as discussed above, the skill level of the immigrant stream was similar to that of the native labor force, so wages did not differ much between the two groups. During more recent years, the immigrant stream has been less skilled than the native labor force, leading to the receipt of lower wages by immigrants. A second form of discrimination is in the jobs an immigrant is able to obtain. For example, in 1910, immigrants accounted for over half of the workers in a number of occupations, including miners, apparel workers, workers in steel manufacturing, meat packers, bakers, and tailors. If a reason for the employment concentration was that immigrants were kept out of alternative higher-paying jobs, then the immigrants would suffer. This type of discrimination may have occurred against Catholics during the 1840s and 1850s and against the immigrants from central, southern, and eastern Europe after 1890. In both cases, it is possible the immigrants suffered because they could not obtain higher-paying jobs. In more recent years, reports of immigrants trained as doctors in their home countries but not allowed to practice easily in the United States may represent a similar situation. Yet the open nature of the U.S. schooling system and economy has been such that this effect usually did not affect the fortunes of the immigrants’ children, or did so to a much smaller degree.

Wage Growth, Job Mobility, and Wealth Accumulation

Another aspect of how immigrants fared in the U.S. labor market is their experiences over time with respect to wage growth, job mobility, and wealth accumulation. A study done by Ferrie (1999) for immigrants arriving between 1840 and 1850, the period when the inflow of immigrants relative to the U.S. population was the highest, found immigrants from Britain and Germany generally improved their job status over time. By 1860, over 75% of the individuals reporting a low-skilled job on the Passenger Lists had moved up into a higher-skilled job, while fewer than 25% of those reporting a high-skilled job on the Passenger Lists had moved down into a lower-skilled job. Thus, the job mobility for these individuals was high. For immigrants from Ireland, the experience was quite different; the percentage of immigrants moving up was only 40% and the percentage moving down was over 50%. It isn’t clear if the Irish did worse because they had less education and fewer skills or whether the differences were due to some type of discrimination against them in the labor market. As to wealth, all the immigrant groups succeeded in accumulating larger amounts of wealth the longer they were in the United States, though their wealth levels fell short of those enjoyed by natives. Essentially, the evidence indicates antebellum immigrants were quite successful over time in matching their skills to the available jobs in the U.S. economy.

The extent to which immigrants had success over time in the labor market in the period since the Civil War is not clear. Most researchers have thought that immigrants who arrived before 1915 had a difficult time. For example, Hanes (1996) concludes that immigrants, even those from northwest Europe, had slower earnings growth over time than natives, a finding he argues was due to poor assimilation. Hatton and Williamson (1998), on the other hand, criticize these findings on technical grounds and conclude that immigrants assimilated relatively easily into the U.S. labor market. For the period after World War II, Chiswick (1978) argues that immigrants’ wages have increased relative to those of natives the longer the immigrants have been in the United States. Borjas (1999) has criticized Chiswick’s finding by suggesting it is caused by a decline in the skills possessed by the arriving immigrants between the 1950s and the 1990s. Borjas finds that 25- to 34-year-old male immigrants who arrived in the late 1950s had wages 9% lower than comparable native males, but by 1970 had wages 6% higher. In contrast, those arriving in the late 1970s had wages 22% lower at entry. By the late 1990s, their wages were still 12% lower than comparable natives. Overall, the degree of success experienced by immigrants in the U.S. labor market remains an area of controversy.

References

Borjas, George J. Heaven’s Door: Immigration Policy and the American Economy. Princeton: Princeton University Press, 1999.

Briggs, Vernon M., Jr. Immigration and the American Labor Force. Baltimore: Johns Hopkins University Press, 1984.

Carter, Susan B., and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 319-341. New York: Russell Sage Foundation, 1999.

Carter, Susan B., et al. Historical Statistics of the United States: Earliest Times to the Present – Millennial Edition. Volume 1: Population. New York: Cambridge University Press, 2006.

Chiswick, Barry R. “The Effect of Americanization on the Earnings of Foreign-Born Men.” Journal of Political Economy 86 (1978): 897-921.

Cohn, Raymond L. “A Comparative Analysis of European Immigrant Streams to the United States during the Early Mass Migration.” Social Science History 19 (1995): 63-89.

Cohn, Raymond L.  “The Transition from Sail to Steam in Immigration to the United States.” Journal of Economic History 65 (2005): 479-495.

Cohn, Raymond L. Mass Migration under Sail: European Immigration to the Antebellum United States. New York: Cambridge University Press, 2009.

Erickson, Charlotte J. Leaving England: Essays on British Emigration in the Nineteenth Century. Ithaca: Cornell University Press, 1994.

Ferenczi, Imre. International Migrations. New York: Arno Press, 1970.

Ferrie, Joseph P. Yankeys Now: Immigrants in the Antebellum United States, 1840-1860. New York: Oxford University Press, 1999.

Friedberg, Rachel M., and Jennifer Hunt. “The Impact of Immigrants on Host Country Wages, Employment and Growth.” Journal of Economic Perspectives 9 (1995): 23-44.

Goldin, Claudia. “The Political Economy of Immigration Restrictions in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary D. Libecap, 223-257. Chicago: University of Chicago Press, 1994.

Grabbe, Hans-Jürgen. “European Immigration to the United States in the Early National Period, 1783-1820.” Proceedings of the American Philosophical Society 133 (1989): 190-214.

Hanes, Christopher. “Immigrants’ Relative Rate of Wage Growth in the Late Nineteenth Century.” Explorations in Economic History 33 (1996): 35-64.

Hansen, Marcus L. The Atlantic Migration, 1607-1860. Cambridge, MA.: Harvard University Press, 1940.

Hatton, Timothy J., and Jeffrey G. Williamson. The Age of Mass Migration: Causes and Economic Impact. New York: Oxford University Press, 1998.

Jones, Maldwyn Allen. American Immigration. Chicago: University of Chicago Press, Second Edition, 1960.

Le May, Michael C. From Open Door to Dutch Door: An Analysis of U.S. Immigration Policy Since 1820. New York: Praeger, 1987.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Massey, Douglas S. “Why Does Immigration Occur? A Theoretical Synthesis.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 34-52. New York: Russell Sage Foundation, 1999.

Miller, Kerby A. Emigrants and Exiles: Ireland and the Irish Exodus to North America. Oxford: Oxford University Press, 1985.

Nugent, Walter. Crossings: The Great Transatlantic Migrations, 1870-1914. Bloomington and Indianapolis: Indiana University Press, 1992.

Taylor, Philip. The Distant Magnet. New York: Harper & Row, 1971.

Thomas, Brinley. Migration and Economic Growth: A Study of Great Britain and the Atlantic Economy. Cambridge, U.K.: Cambridge University Press, 1954.

U.S. Department of Commerce. Historical Statistics of the United States. Washington, DC, 1976.

U.S. Immigration and Naturalization Service. Statistical Yearbook of the Immigration and Naturalization Service. Washington, DC: U.S. Government Printing Office, various years.

Walker, Mack. Germany and the Emigration, 1816-1885. Cambridge, MA: Harvard University Press, 1964.

Williamson, Jeffrey G., and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Citation: Cohn, Raymond L. “Immigration to the United States”. EH.Net Encyclopedia, edited by Robert Whaples. Revised August 2, 2017. URL http://eh.net/encyclopedia/immigration-to-the-united-states/

Fertility and Mortality in the United States

Michael Haines, Colgate University

Every modern, economically developed nation has experienced the demographic transition from high to low levels of fertility and mortality. America is no exception. In the early nineteenth century, the typical American woman had between seven and eight live births in her lifetime and people probably lived fewer than forty years on average. But America was also distinctive. First, its fertility transition began in the late eighteenth or early nineteenth century at the latest. Other Western nations began their sustained fertility declines in the late nineteenth or early twentieth century, with the exception of France, whose decline also began early. Second, the fertility rate in America commenced its sustained decline long before that of mortality. This contrasts with the more typical demographic transition in which mortality decline precedes or occurs simultaneously with fertility decline. American mortality did not experience a sustained and irreversible decline until about the 1870s. Third, both these processes were influenced by America’s very high level of net in-migration and also by the significant population redistribution to frontier areas and later to cities, towns, and suburbs.

One particular difficulty for American historical demography is lack of data. During the colonial period, there was neither a regular enumeration nor vital registration. Some scholars, however, have conducted family reconstitutions and other demographic reconstructions using genealogies, parish registers, biographical data, and other local records, so we do know something about vital rates and population characteristics. In 1790, of course, the federal government commenced the decennial U.S. census, which has been the principal source for the study of population growth, structure, and redistribution, as well as fertility prior to the twentieth century. But vital registration was left to state and local governments. Massachusetts was the first state to institute continuous recording of births, deaths, and marriages, beginning in 1842 (some individual cities had registered vital events earlier), but the entire nation was not covered until 1933.

For the colonial period, we know more about population size than other matters, since the British colonial authorities did conduct some enumerations. The population of the British mainland colonies increased from several hundred non-Amerindian individuals in the early seventeenth century to about 2.5 million (2 million whites and about half a million blacks) in 1780. Birthrates were high, ranging from over forty to over fifty live births per one thousand people per annum. The high fertility of American women attracted comment from late eighteenth-century observers, including Benjamin Franklin and Thomas Malthus. Mortality rates were probably moderate, with crude death rates ranging from about twenty per one thousand people per annum to over forty. We know a good deal about mortality rates in New England, somewhat less about the Middle Colonies, and least about the South. But apparently mortality was lower from Pennsylvania and New Jersey northward, and higher in the South. Life expectancy at birth ranged from the late twenties to almost forty.

Information on America’s demographic transition becomes more plentiful for the nineteenth and twentieth centuries. The accompanying table provides summary measures of fertility and mortality for the period 1800-2000. It includes, for fertility, the crude birthrate, the child-woman ratio (based solely on census data), and the total fertility rate; and, for mortality, life expectancy at birth and the infant mortality rate. The results are given for the white and black populations separately because of their very different social, economic, and demographic experiences.
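
Stated compactly, the fertility and mortality measures in Table 1 follow the standard demographic definitions given in the table notes. The notation below is introduced only for illustration, with B denoting births in a year, P the total population, D_0 infant deaths, and f_a the period fertility rate at single year of age a (births per woman):

\[
\text{Crude birthrate} = \frac{B}{P}\times 1000, \qquad \text{Child-woman ratio} = \frac{\text{children aged 0-4}}{\text{women aged 20-44}}\times 1000,
\]
\[
\text{Infant mortality rate} = \frac{D_0}{\text{live births}}\times 1000, \qquad \text{Total fertility rate} = \sum_a f_a .
\]

Life expectancy at birth, by contrast, is not a simple ratio but is computed from a full period life table.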

Table 1 indicates the sustained decline in white birthrates from at least 1800 and of black fertility from at least 1850. Family sizes were large early in the nineteenth century, being approximately seven children per woman at the beginning of the century and between seven and eight for the largely rural slave population at mid-century. The table also reveals that mortality did not begin to decline until about the 1870s or so. Prior to that, death rates fluctuated, being affected by periodic epidemics and changes in the disease environment. There is some evidence of rising death rates during the 1830s and 1840s. The table also shows that American blacks had both higher fertility and higher mortality relative to the white population, although both groups experienced fertility and mortality transitions. For example, both participated in the rise in birthrates after World War II known as the baby boom, as well as the subsequent resumption of birthrate declines in the 1960s.

Conventional explanations for the fertility transition have involved the rising cost of children because of urbanization, the growth of incomes and nonagricultural employment, the increased value of education, rising female employment, child labor laws and compulsory education, and declining infant and child mortality. Changing attitudes toward large families and contraception, as well as better contraceptive techniques, have also been cited. Recent literature suggests that women were largely responsible for much of the birthrate decline in the nineteenth century — part of a movement for greater control over their lives. The structural explanations fit the American experience since the late nineteenth century, but they are less appropriate for the fertility decline in rural areas prior to about 1870. The increased scarcity and higher cost of good agricultural land has been proposed as a prime factor, although this is controversial. The standard explanations do not adequately explain the post-World War II baby boom and subsequent baby bust. More complex theories, including the interaction of the size of generations with their income prospects, preferences for children versus material goods, and expectations about family size, have been proposed.

The mortality decline since the late nineteenth century seems to have been the result particularly of improvements in public health and sanitation, especially better water supplies and sewage disposal. The improving diet, clothing, and shelter of the American population over the period since about 1870 also played a role. Specific medical interventions beyond more general environmental public health measures were not statistically important until well into the twentieth century. It is difficult to disentangle the separate effects of these factors. But it is clear that much of the decline was due to rapid reductions in specific infectious and parasitic diseases, including tuberculosis, pneumonia, bronchitis, and gastro-intestinal infections, as well as such well-known lethal diseases as cholera, smallpox, diphtheria, and typhoid fever. Nineteenth-century cities were especially unhealthy places, particularly the largest ones. This began to change by about the 1890s, when the largest cities instituted new public works sanitation projects (such as piped water, sewer systems, filtration and chlorination of water) and public health administration. They then experienced rapid improvements in death rates. As for the present, rural-urban mortality differentials have converged and largely disappeared. This, unfortunately, is not true of the differentials between whites and blacks.

Table 1
Fertility and Mortality in the United States, 1800-1999

Approx. Date Birthrate (a) Child-Woman Ratio (b) Total Fertility Rate (c) Life Expectancy (d) Infant Mortality Rate (e)
White Black (f) White Black White Black (f) White Black (f) White Black (f)
1800 55.0 1342 7.04
1810 54.3 1358 6.92
1820 52.8 1295 1191 6.73
1830 51.4 1145 1220 6.55
1840 48.3 1085 1154 6.14
1850 43.3 58.6 (g) 892 1087 5.42 7.90 (g) 39.5 23.0 216.8 340.0
1860 41.4 55.0 (h) 905 1072 5.21 7.58 (h) 43.6 181.3
1870 38.3 55.4 (i) 814 997 4.55 7.69 (i) 45.2 175.5
1880 35.2 51.9 (j) 780 1090 4.24 7.26 (j) 40.5 214.8
1890 31.5 48.1 685 930 3.87 6.56 46.8 150.7
1900 30.1 44.4 666 845 3.56 5.61 51.8 (k) 41.8 (k) 110.8 (k) 170.3
1910 29.2 38.5 631 736 3.42 4.61 54.6 (l) 46.8 (l) 96.5 (l) 142.6
1920 26.9 35.0 604 608 3.17 3.64 57.4 47.0 82.1 131.7
1930 20.6 27.5 506 554 2.45 2.98 60.9 48.5 60.1 99.9
1940 18.6 26.7 419 513 2.22 2.87 64.9 53.9 43.2 73.8
1950 23.0 33.3 580 663 2.98 3.93 69.0 60.7 26.8 44.5
1960 22.7 32.1 717 895 3.53 4.52 70.7 63.9 22.9 43.2
1970 17.4 25.1 507 689 2.39 3.07 71.6 64.1 17.8 30.9
1980 15.1 21.3 300 367 1.77 2.18 74.5 68.5 10.9 22.2
1990 15.8 22.4 298 359 2.00 2.48 76.1 69.1 7.6 18.0
2000 13.9 17.0 343 401 2.05 2.13 77.4 71.7 5.7 14.1

(a) Births per 1000 population per annum.
(b) Children aged 0-4 per 1000 women aged 20-44. Taken from U.S. Bureau of the Census (1975), Series 67-68 for 1800-1970. For the black population 1820-1840, W.S. Thompson and P.K. Whelpton, Population Trends in the United States (New York: McGraw-Hill, 1933), Table 74, adjusted upward 47% for relative under-enumeration of black children aged 0-4 in the censuses of 1820-1840.
(c) Total number of births per woman if she experienced the current period age-specific fertility rates throughout her life.
(d) Expectation of life at birth for both sexes combined.
(e) Infant deaths per 1000 live births per annum.
(f) Black and other population for birth rate (1920-1970), total fertility rate (1940-1990), life expectancy at birth (1950-1960), and infant mortality rate (1920-1970).
(g) Average for 1850-59.
(h) Average for 1860-69.
(i) Average for 1870-79.
(j) Average for 1880-84.
(k) Approximately 1895.
(l) Approximately 1904.

Sources: U.S. Bureau of the Census, Historical Statistics of the United States (Washington, DC: G.P.O., 1975). U.S. Bureau of the Census, Statistical Abstract of the United States, 1986 (Washington, DC: G.P.O., 1985). Statistical Abstract of the United States, 2001 (Washington, DC: G.P.O., 2001). National Center for Health Statistics, National Vital Statistics Reports, various issues. Census 2000 Summary File 1: National File (May, 2003). Ansley J. Coale and Melvin Zelnik, New Estimates of Fertility and Population in the United States (Princeton, NJ: Princeton University Press, 1963). Ansley J. Coale and Norfleet W. Rives, “A Statistical Reconstruction of the Black Population of the United States, 1880-1970: Estimates of True Numbers by Age and Sex, Birth Rates, and Total Fertility,” Population Index 39, no. 1 (Jan., 1973): 3-36. Michael R. Haines, “Estimated Life Tables for the United States, 1850-1900,” Historical Methods 31, no. 4 (Fall 1998): 149-169. Samuel H. Preston and Michael R. Haines, Fatal Years: Child Mortality in Late Nineteenth Century America (Princeton, NJ: Princeton University Press, 1991), Table 2.5. Richard H. Steckel, “A Dreadful Childhood: The Excess Mortality of American Slaves,” Social Science History (Winter 1986): 427-465.

References

Haines, Michael R. and Richard H. Steckel (editors). A Population History of North America. New York: Cambridge University Press, 2001.

Klein, Herbert. A Population History of the United States. New York: Cambridge University Press, 2004.

Vinovskis, Maris (editor). Studies in American Historical Demography. New York: Academic Press, 1979.

Citation: Haines, Michael. “Fertility and Mortality in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 19, 2008. URL http://eh.net/encyclopedia/fertility-and-mortality-in-the-united-states/

Apprenticeship in the United States

Daniel Jacoby, University of Washington, Bothell

Once the principal means by which craft workers learned their trades, apprenticeship plays a relatively small part in American life today. The essence of this institution has always involved an exchange of labor for training, yet apprenticeship has been far from constant over time as its survival in the United States has required nearly continual adaptation to new challenges.

Four distinct challenges define the periods of major apprenticeship changes. The colonial period required the adaptation of Old World practices to New World contexts. In the era of the new republic, apprenticeship was challenged by the clash between traditional authority and the logic of expanding markets and contracts. The main concern after the Civil War was to find a training contract that could resolve the heightening tensions between organized labor and capital. Finally, in the modern era following World War I, industrialization’s skill-leveling effects posed a challenge that apprenticeship largely failed to meet. Apprenticeship lost ground as schooling increasingly became the preferred vehicle for the upward social mobility that offset the leveling effects of industrialization. After reviewing these episodes, this essay concludes by speculating whether we are now in a new era of challenges that will reshape apprenticeship.

Apprenticeship came to American soil by way of England, where it was the first step on the road to economic independence. In England, master craftsmen hired apprentices in an exchange of training for service. Once their term of apprenticeship was completed, former apprentices traveled from employer to employer earning wages as journeymen. When, or if, they accumulated enough capital, journeymen set up shop as independent masters and became members of their craft guilds. These institutions had the power to bestow and withdraw rights and privileges upon their members, and thereby to regulate competition among themselves.

One major concern of the guilds was to prevent unrestricted entry into the trades, and thus apprenticeship became the object of much regulation. Epstein (1998), however, argues that monopoly or rent-seeking activity (the deliberate production of scarcity) was only incidental to the guilds’ primary interest in supplying skilled workmen. Whatever success guilds had in regulating apprenticeship in Britain, that pattern was less readily replicated in the Americas, whose colonists came to exploit a bounty of natural resources under mercantilist proscriptions that forbade most forms of manufacturing. The result was an agrarian society practically devoid of large towns and guilds. Absent these entities, the regulation of apprenticeship relied upon government actions, which appear to have become more pronounced toward the mid-eighteenth century. Government regulation existed in the Old World as well, notably through Britain’s Statute of Artificers of 1563. However, as Davies (1956) shows, English apprenticeship differed in that craft guilds and their attendant traditions were more significant.

The Colonial Period

During the colonial period, the U.S. was predominantly an agrarian society. As late as 1790 no city possessed a population in excess of 50,000; in 1740 the largest colonial city, Philadelphia, had 13,000 inhabitants. Even so, the colonies could not operate successfully without some skilled tradesmen in fields like carpentry, cordwaining (shoemaking), and coopering (barrel making). Neither the training of slaves nor the immigration of skilled European workmen was sufficient to keep the labor-short colonies from developing their own apprenticeship systems. No uniform system of apprenticeship developed, however, because municipalities, and even states, lacked the authority either to enforce their rules outside their own jurisdictions or to restore distant runaways to their masters. Accordingly, apprenticeship remained a local institution.

Records from the colonial period are sparse, but both Philadelphia and Boston have preserved important evidence. In Philadelphia, Quimby (1963) traced official apprenticeship back at least to 1716. By 1745 the city had recorded 149 indentures in 33 crafts. The number of apprentices grew more rapidly than the population, and after an additional 25 years it had reached 537.

Quimby’s colonial Philadelphia data indicate that apprenticeship typically consigned boys, aged 14 to 17, to serve their masters until their twenty-first birthdays. Girls, too, were apprenticed, but females comprised less than one-fifth of recorded indentures, and most of them were bound to learn housewifery. One significant variation on the standard indenture involved the binding of parish orphans. Such paupers were generally indentured to less remunerative trades, most often farming. Yet another variation involved the coveted apprenticeships with merchants, lawyers, and other professionals. In these instances, parents usually paid masters beforehand to take their children.

Apprenticeship’s distinguishing feature was its contract of indenture, which elaborated the terms of the arrangement. This contract differed in two major ways from the contracts of indenture that bound immigrants. First, the apprenticeship contract involved young people and, as such, required the signature of their parents or guardians. Second, indentured servitude, which Galenson (1981) argues was adapted from apprenticeship, substituted Atlantic transportation for trade instruction in the exchange of a servant’s labor. Both forms of labor involved some degree of mutuality or voluntary agreement. In apprenticeship, however, legal or natural parents transferred legal authority over their child to another, the apprentice’s master, for a substantial portion of his or her youth. In exchange for rights to their child’s labor, parents were also relieved of direct responsibility for child rearing and occupational training. Thus the child’s consent could be of less consequence than that of the parents.

The articles of indenture typically required apprentices to serve their terms faithfully and obediently. Indentures commonly included clauses prohibiting specific behaviors, such as playing dice or fornication. Masters generally pledged themselves to raise, feed, lodge, educate, and train apprentices, and then to provide “freedom dues” consisting of clothes, tools, or money once the apprentices completed the terms of their indentures. Parents or guardians were co-signatories of the agreements. Although practice in the American colonies is incompletely documented, we know that in Canada parents were held financially responsible to masters when their children ran away.

To enforce their contracts, parties to the agreement could appeal to local magistrates. Problems arose for many reasons, but the long duration of the contract inevitably involved unforeseen contingencies that gave rise to dissatisfaction with the arrangements. Unlike simple exchanges of goods, apprenticeship was inevitably complicated by the messy business of child rearing.

The Early Republic

William Rorabaugh (1986) argues that the revolutionary era increased the complications inherent in apprenticeship. The rhetoric of independence could not be contained within the formal political realm of relations between nations; it spilled into interpersonal realms, where the independence to govern one’s self challenged traditions of deference based upon social status. Freedom was increasingly equated with contractual relations and consent, but exchange based on contract undermined the authority of masters. And so it was with servants and apprentices who, empowered by republican ideology, began to challenge their masters, conceiving of themselves not as willful children but as free and independent citizens of the Revolution.

The revolutionary logic of contract ate away at the edges of the long-term apprenticeship relationship, and such indentures became substantially less common in the first half of the nineteenth century. Gillian Hamilton (2000) has tested whether the decline in apprenticeship stemmed from problems in enforcing long-term contracts or from a shift by employers toward hiring unskilled workers for factory work. While neither theory alone explains the decline in the stock of apprenticeship contracts, both demonstrate how emerging contractual relations undermined tradition by providing new choices. During this period she finds that masters began to pay their apprentices, that over time those payments rose more steeply with experience, and that indenture contracts were shortened, all of which suggests that employers consciously patterned contracts to reduce the turnover that resulted when apprentices left for preferable situations. That employers increasingly preferred to be freed from the long-term obligations they owed their apprentices suggests that these responsibilities in loco parentis imposed burdens upon masters as well as apprentices. The payment of money wages reflected, in part, the costs of masters’ parental responsibilities, which could now more easily be avoided in urban areas by shifting those responsibilities back to youths and their parents.

Hamilton’s evidence comes from Montreal, where indentures were centrally recorded. While Canadian experiences differed in several identifiable ways from those in the United States, the broader trends she describes are consistent with those observed there. In Frederick County, Maryland, for example, Rorabaugh (1986) finds that the percentage of white males aged 15 to 20 who were formally bound as apprentices fell from nearly 20% to less than 1% between 1800 and 1860. The U.S. decline, however, is more difficult to gauge because informal apprenticeship arrangements that were not officially recorded appear to have risen. In key respects, issues pertaining to the master’s authority remained an unresolved complication that prevented a uniform apprenticeship system and encouraged informal arrangements well after slavery was abolished.

Postbellum Period

While the Thirteenth Amendment to the U.S. Constitution formally ended involuntary servitude in 1865, the boundary line between involuntary and voluntary contracts remained problematic, especially in regard to apprenticeship. Although courts explained that labor contracts enforced under penalty of imprisonment generally created involuntary servitude, employers explored contract terms that gave them unusual authority over their apprentices. States sometimes developed statutes to protect minors by prescribing the terms of legally enforceable apprenticeship indentures. Yet doing so necessarily limited freedom of contract, making it difficult, if not impossible, to rearrange the terms of an apprenticeship agreement to fit any particular situation. Both the age of the apprentice and the length of the indenture made the arrangement vulnerable to abuse. However, it proved extremely difficult for lawmakers to specify the precise circumstances warranting statutory indentures without making them unattractive. In good measure this was because representatives of labor and capital seldom agreed when it came to public policy regarding skilled employment. Yet the need for some policy increased, especially after the labor scarcities created by the Civil War.

Companies, unions, and governments all sought solutions to the shortages of skills caused by the Civil War. In Boston and Chicago, for example, women were recruited to perform skilled typography work that had previously been restricted to men. The Connecticut legislature authorized a new company to recruit and contract skilled workers from abroad. Other states either wrote new apprenticeship laws or experimented with new ways of training workers. The success of craft unionism was itself an indication of the dearth of organizations capable of implementing skill standards. Virtually any new action challenged the authority of either labor or capital, leading one or the other to contest it. Jacoby (1996) argues that the most important new strategy involved the introduction of short trade school courses intended to substitute for apprenticeship. Schooling fed employers’ hope that they might sidestep organized labor’s influence in determining the supply of skilled labor.

Independent of the expansion of schooling, issues pertaining to apprenticeship contract rights gained in importance. Firms like Philadelphia’s Baldwin Locomotive held back wages until contract completion in order to keep their apprentices with them. The closer young apprentices were bound to their employers, the less viable became organized labor’s demand to be consulted over, or to unilaterally control, the expansion or contraction of training. One-sided long-term apprenticeship contracts provided employers other advantages as well. Once apprentices were under contract, competitors and unions could be legally enjoined from “enticing” them into breaking their contracts. Although employers rarely brought suit against each other for enticement of their apprentices, their associations, like the Metal Manufacturers Association in Philadelphia, prevented apprentices from leaving one master for another by requiring the consent and recommendation of member employers (Harris, 2000). Employer associations could, in this way, effectively blacklist union supporters and require apprentices to break strikes.

These employer actions did not occur in a vacuum. Many businessmen faulted labor for tying their hands when responding to increased demands for labor. Unions lost support among the working class when they restricted the number of apprentices an employer could hire. Such restrictions frequently involved ethnic, racial, and gender preferences that locked minorities out of the well-paid crafts. Organized labor’s control was, nonetheless, less effective than it would have liked: it could not keep non-union firms from taking on apprentices, nor was it able to stem the flow of half-trained craftsmen from the small towns where apprenticeship standards were weak. Yet through fines, boycotts, and walkouts organized labor did intimidate workers and firms who disregarded its rules. Such actions failed to endear it to less skilled workers, who often regarded skilled unionists as a conservative aristocracy only slightly less onerous, if at all, than big business.

This weakness in labor’s support made it vulnerable to Colonel Richard T. Auchmuty’s New York Trade School. Auchmuty’s school, begun in 1881, became the leading institution challenging organized labor’s control over the supply of skilled workers. The school was designed and marketed as an alternative to apprenticeship, and Auchmuty encouraged its use as a weapon in “the battle for the boys” waged by New York City plumbers in 1886-87. Those years mark the starting point for a series of skirmishes between organized capital and labor in which momentum seesawed back and forth. Those battles encouraged public officials and educators to get involved. Where the public sector took greater interest in training, schooling more frequently supplemented, rather than replaced, on-the-job apprenticeship training. Public involvement also helped formalize the structure of trade learning in ways that apprenticeship laws had failed to do.

The Modern Era

In 1917, with the benefit of prior collaborations involving the public sector, a coalition of labor, business, and social service groups secured passage of the Smith-Hughes Act to provide federal aid for vocational education. Despite this broad support, it is not clear that the bill would have passed had it not been for America’s entry into the First World War and the attendant priority placed on increasing the supply of skilled labor. Prior to this law, demands for skilled labor had been partially muted by new mass-production technologies and scientific management, both of which reduced industry’s reliance upon craft workers. War changed the equation.

Not only did war spur the Wilson administration into training more workers, it also raised organized labor’s visibility in industries, like shipbuilding, where it had previously been locked out. Under Smith-Hughes, cities as distant as Seattle and New York invited unions to join formal training partnerships. In the twenties, a number of school systems provided apprentice extension classes for which prior employment was a prerequisite, thereby limiting public apprenticeship support to workers who were already unionized. These arrangements made it easier for organized labor to control entry into the craft, most notably in the building trades, where the unions remained well organized throughout the decade. The fast-expanding factory sector, however, more successfully reduced union influence. The largest firms, such as the General Electric Company, had long since set up their own non-union, usually informal, apprenticeship plans. Large firms able to provide significant employment security, like those that belonged to the National Association of Corporation Schools, typically operated in a union-free environment, which enabled them to establish training arrangements that were flexible and responsive to their needs.

The depression of the early thirties stopped nearly all training. Moreover, the prior industrial transformation shifted power within organized labor from the American Federation of Labor’s bedrock craft unions to the Congress of Industrial Organizations. With this change labor increasingly emphasized pay equality by narrowing skill differentials and accordingly de-emphasized training issues. Even so, by the late 1930s shortages of skilled workers were felt again, leading to a national apprenticeship plan. Under the Fitzgerald Act (1937), apprenticeship standards were formalized in indentures that specified the kinds and quantity of training to be provided, as well as the responsibilities of joint labor-management apprenticeship committees. Standards helped minimize incentives to abuse low-wage apprentices through inadequate training and advancement. Nationally, however, the percentage of workers serving as apprentices remained very small, and young people increasingly chose formal education rather than apprenticeship as a route to opportunity. While the Fitzgerald Act worked to protect labor’s immediate interests, very few firms chose formal apprenticeships when less structured training relationships were possible.

This system persisted through the heyday of organized labor in the forties and fifties, but began to come undone in the late sixties and seventies, particularly when Civil Rights groups attacked the racial and gender discrimination too often used to ration scarce apprenticeship opportunities. Discrimination was sometimes passive, occurring as the result of preferential treatment extended to the sons and friends of craft workers, while in other instances it involved active and deliberate policies aimed at exclusion (Hill, 1968). Affirmative action accords and court orders have forced unions and firms to provide more apprenticeship opportunities for minorities.

Along with the declining influence of labor and civil rights organizations, work relations appear to have changed as we begin the new millennium. Forms of labor contracting that provide fewer benefits and less security are on the rise. Incomes have once again become more stratified by education and skill levels, making training and skill acquisition much more important issues. Gary Becker’s (1964) work on human capital theory has encouraged businessmen and educators to rethink the economics of training and apprenticeship. Conceptualizing training as an investment, the theory suggests that enforceable long-term apprenticeships enable employers to increase their investments in the skills of their workers. Binding indentures are rationalized as efficient devices to prevent youths from absconding with the capital employers have invested in them. Armed with this understanding, policy makers have increasingly permitted and encouraged arrangements that look more like older-style, employer-dominated apprenticeships. Whether this is the beginning of a new era for apprenticeship, or merely a return to the prior battles over the abuses of one-sided employer control, only time will tell.

References and further reading:

Becker, Gary. Human Capital. Chicago: University of Chicago Press, 1964.

Davies, Margaret. The Enforcement of English Apprenticeship, 1563-1642. Cambridge, MA: Harvard University Press, 1956.

Douglas, Paul. American Apprenticeship and Industrial Education. New York: Columbia University Press, 1921.

Elbaum, Bernard. “Why Apprenticeship Persisted in Britain but Not in the United States.” Journal of Economic History 49 (1989): 337-49.

Epstein, S. R. “Craft Guilds, Apprenticeship and Technological Change in Pre-industrial Europe.” Journal of Economic History 58, no. 3 (1998): 684-713.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3, (2000): 627-664.

Harris, Howell John. Bloodless Victories: The Rise and Decline of the Open Shop Movement in Philadelphia, 1890-1940. New York: Cambridge University Press, 2000.

Hill, Herbert. “The Racial Practices of Organized Labor: The Contemporary Record.” In The Negro and The American Labor Movement, edited by Julius Jacobson. Garden City, New York: Doubleday Press, 1968.

Jacoby, Daniel. “The Transformation of Industrial Apprenticeship in the United States.” Journal of Economic History 51, no. 4 (1991): 887-910.

Jacoby, Daniel. “Plumbing the Origins of American Vocationalism.” Labor History 37, no. 2 (1996): 235-272.

Licht, Walter. Getting Work: Philadelphia, 1840-1950. Cambridge, MA: Harvard University Press, 1992.

Quimby, Ian M.G. “Apprenticeship in Colonial Philadelphia.” Ph.D. Dissertation, University of Delaware, 1963.

Rorabaugh, William. The Craft Apprentice from Franklin to the Machine Age in America. New York: Oxford University Press, 1986.

Citation: Jacoby, Daniel. “Apprenticeship in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 29, 2004. URL http://eh.net/encyclopedia/apprenticeship-in-the-united-states/

Advertising Bans in the United States

Jon P. Nelson, Pennsylvania State University

Freedom of expression has always ranked high on the American scale of values and fundamental rights. This essay addresses regulation of “commercial speech,” which is defined as speech or messages that propose a commercial transaction. Regulation of commercial advertising occurs in several forms, but it is often controversial. In 1938, the Federal Trade Commission (FTC) was given the authority to regulate “unfair or deceptive” advertising. Congressional hearings were first held in 1939 on proposals to ban radio advertising of alcohol beverages (Russell 1940; U.S. Congress 1939, 1952). Actions by the FTC during 1964-69 led to the 1971 ban of radio and television advertising of cigarettes. In 1997, the distilled spirits industry reversed a six-decade-old policy and began using cable television advertising. Numerous groups immediately called for removal of the ads, and Rep. Joseph Kennedy II (D, MA) introduced a “Just Say No” bill that would have banned all alcohol advertisements from the airwaves. In 1998, the Master Settlement Agreement between the state attorneys general and the tobacco industry put an end to billboard advertising of cigarettes. Do these regulations make any difference for the demand for alcohol or cigarettes? When will an advertising ban increase consumer welfare? What legal standards apply to commercial speech that affect the extent and manner in which governments can restrict advertising?

For many years, the Supreme Court held that the broad powers of government to regulate commerce included the “lesser power” to restrict commercial speech.1 In Valentine (1942), the Court held that the First Amendment does not protect “purely commercial advertising.” This view was applied when the courts upheld the ban of broadcast advertising of cigarettes, 333 F. Supp. 582 (1971), affirmed per curiam, 405 U.S. 1000 (1972). However, in the mid-1970s this view began to change as the Court invalidated several state regulations affecting advertising of services and products such as abortion providers and pharmaceutical drugs. In Virginia State Board of Pharmacy (1976), the Court struck down a Virginia law that prohibited the advertising of prices for prescription drugs, and held that the First Amendment protects the right to receive information as well as the right to speak. Responding to the claim that advertising bans improved the public image of pharmacists, Justice Blackmun wrote that “an alternative [exists] to this highly paternalistic approach . . . people will perceive their own best interests if only they are well enough informed, and the best means to that end is to open the channels of communication rather than to close them” (425 U.S. 748, at 770). In support of its change in direction, the Court asserted two main arguments: (1) truthful advertising conveys information that consumers need to make informed choices in a free enterprise economy; and (2) such information is indispensable to the formation of intelligent opinions about how the economic system should be regulated or governed. In Central Hudson Gas & Electric (1980), the Court refined its approach and laid out a four-prong test for “intermediate” scrutiny of restrictions on commercial speech. First, the message content cannot be misleading and must be concerned with a lawful activity or product. Second, the government’s interest in regulating the speech in question must be substantial. Third, the regulation must directly and materially advance that interest. Fourth, the regulation must be no more extensive than necessary to achieve its goal. That is, there must be a “reasonable fit” between means and ends, with the means narrowly tailored to achieve the desired objective. Applying the third and fourth prongs, in 44 Liquormart (1996) the Court struck down a Rhode Island law that banned retail price advertising of beverage alcohol. In doing so, the Court made clear that the state’s power to ban alcohol entirely did not include the lesser power to restrict advertising. More recently, in Lorillard Tobacco (2001) the Supreme Court invalidated a state regulation on the placement of outdoor and in-store tobacco displays. In summary, Central Hudson requires the use of a “balancing” test to examine censorship of commercial speech. The test weighs the government’s obligations toward freedom of expression against its interest in limiting the content of some advertisements. Reasonable constraints on time, place, and manner are tolerated, and false advertising remains illegal.

This article provides a brief economic history of advertising bans, and uses the basic framework contained in the Central Hudson decision. The first section discusses the economics of advertising and addresses the economic effects that might be expected from regulations that prohibit or restrict advertising. Applying the Central Hudson test, the second section reviews the history and empirical evidence on advertising bans for alcohol beverages. The third section reviews bans of cigarette advertising and discusses the regulatory powers that reside with the Federal Trade Commission as the main government agency with the authority to regulate unfair or deceptive advertising claims.

The Economics of Advertising

Judged by the magnitude of exposures and expenditures, advertising is a pervasive and economically important activity. A rule of thumb in the advertising industry is that the average American is exposed to more than 1,000 advertising messages every day, but actively notices fewer than 80 ads. According to Advertising Age (http://www.adage.com), advertising expenditures in 2002 in all media totaled $237 billion, including $115 billion in 13 measured media. Ads in newspapers accounted for 19.2% of measured spending, followed by network TV (17.3%), magazines (15.6%), spot TV (14.0%), yellow pages (11.9%), and cable/syndicated TV (11.9%). Internet advertising now accounts for about 5.0% of spending. By product category, automobile producers were the largest advertisers ($16 billion of measured media), followed by retailing ($13.5 billion), movies and media ($6 billion), and food, beverages, and candies ($6.0 billion). Beverage alcohol producers ranked 17th ($1.7 billion) and tobacco producers ranked 23rd ($284 million). Among the top 100 advertisers, Anheuser-Busch occupied the 38th spot and Altria Group (which includes Philip Morris) ranked 17th. Total advertising expenditures in 2002 were about 2.3% of U.S. gross domestic product (GDP). Ad spending tends to vary directly with general economic activity, as illustrated by spending reductions during the 2000-2001 recession (Wall Street Journal, Aug. 14, 2001; Nov. 28, 2001; Dec. 12, 2001; Apr. 25, 2002). This pro-cyclical feature is contrary to Galbraith’s view that business firms use advertising to control or manage aggregate consumer demand.

National advertising of branded products developed in the early 1900s as increased urbanization and improvements in communication, transportation, and packaging permitted the development of mass markets for branded products (Chandler 1977). In 1900, the advertising-to-GDP ratio was about 3.1% (Simon 1970). The ratio stayed around 3% until 1929, but declined to 2% during the 1930s and has fluctuated around that value since then. The growth of major national industries was associated with increased promotion, although other economic changes often preceded the use of mass media advertising. For example, refrigeration of railroad cars in the late 1870s resulted in national advertising by meat packers in the 1890s (Pope 1983). Around the turn of the century, Sears Roebuck and Montgomery Ward utilized low-cost transportation and mail-order catalogs to develop efficient systems of national distribution of necessities. By 1920 more Americans were living in urban areas than in rural areas. The location of retailers began to change, with a shift first to downtown shopping districts and later to suburban shopping malls. Commercial radio began in 1922, and advertising expenditures grew from $113 million in 1935 to $625 million in 1952. Commercial television was introduced in 1941, but wartime delayed the diffusion of television. By 1954, half of the households in the U.S. had at least one television set. Expenditures on TV advertising grew rapidly from $454 million in 1952 to $2.5 billion in 1965 (Backman 1968). These changes affected the development of markets — for instance, new products could be introduced more rapidly and the available range of products was enhanced (Borden 1942).

Market Failure: Incomplete and Asymmetric Information

Because it is costly to acquire and process, the information held by buyers and sellers is necessarily incomplete and possibly unequal as well. However, full or “perfect” information is one of the analytical requirements for the proper functioning of competitive markets — so what happens when information is imperfect or unequal? Suppose, for example, that firms charge different prices for identical products, but some consumers (tourists) are ignorant of the dispersion of prices available in the marketplace. For many years, this question was largely ignored by economists, but two contributions sparked a revolution in economic thinking. Stigler (1961) showed that because information is costly to acquire, consumer search for lower prices will be less than complete. As a result, a dispersion of prices can persist and the “law of one price” is violated. The dispersion will be less if the product represents a large expenditure (e.g., autos), since more individual search is supported and suppliers have an extra incentive to promote the product. Because information has public good characteristics, imperfect information provides a rationale for government intervention, but profit-seeking firms also have reasons to reduce search costs through advertising and brand names. Akerlof (1970) took the analysis a step further by focusing on material aspects of a product that are known to the seller but not to potential buyers. In Akerlof’s “lemons model,” the seller of a used car has private knowledge of defects, but potential buyers have difficulty distinguishing between good used cars (“creampuffs”) and bad used cars (“lemons”). Under these circumstances, Akerlof showed that a market may fail to exist or that only lower-quality products will be offered for sale. Hence, asymmetric information can result in market failure, but a reputation for quality can reduce the uncertainty that consumers face due to hidden defects (Akerlof 1970; Richardson 2000; Stigler 1961).
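
The mechanism can be made concrete with a stylized numerical sketch; the dollar values below are hypothetical and chosen only for illustration, not taken from Akerlof. Suppose half of all used cars are creampuffs that buyers value at $8,000 and their current owners at $6,000, while the other half are lemons that buyers value at $3,000 and owners at $2,000. A buyer who cannot tell the two apart will pay at most the expected value

\[ p = 0.5 \times 8{,}000 + 0.5 \times 3{,}000 = 5{,}500. \]

Because $5,500 is below the $6,000 that creampuff owners require, only lemons are offered for sale, the price buyers are willing to pay falls toward $3,000, and the market for good used cars unravels.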

Under some conditions, branding and advertising of products, including targeting of customer groups, can help reduce market imperfections. Because advertising has several purposes or functions, there is always uncertainty regarding its effects. First, advertising may help inform consumers of the existence of products and brands, better inform them about price and quality dimensions, or better match customers and brands (Nelson 1975). Indeed, the basic message in many advertisements is simply that the brand is available. Consumer valuations can reflect a joint product, which is the product itself and the information about it. However, advertising tends to focus on only the positive aspects of a product, and ignores the negatives. In various ways, advertisers sometimes inform consumers that their brand is “less bad” (Calfee 1997b). An advertisement that announces a particular automobile is more crash resistant also is a reminder that all cars are less than perfectly safe. Second, persuasive or “combative” advertising can serve to differentiate one firm’s brand from those of its rivals. As a consequence, a successful advertiser may gain some discretion over the price it charges (“market power”). Furthermore, reactions by rivals may drive industry advertising to excessive levels or beyond the point where net social benefits of advertising are maximized. In other words, excessive advertising may result from the inability of each firm to reduce advertising without similar reductions by its rivals. Because it illustrates a breakdown of desirable coordination, this outcome is an example of the “prisoners’ dilemma game.” Third, the costs of advertising and promotion by existing or incumbent firms can make it more difficult for new firms to enter a market and compete successfully due to an advertising-cost barrier to entry. Investments in customer loyalty or intangible brand equity are largely sunk costs. Smaller incumbents also may be at a disadvantage relative to their larger rivals, and consequently face a “barrier to mobility” within the industry. However, banning advertising can have much the same effect by making it more difficult for smaller firms and entrants to inform customers of the existence of their brands and products. For example, Russian cigarette producers were successful in banning television advertising by new western rivals. Given multiple effects, systematic empirical evidence is needed to help resolve the uncertainties regarding the effects of advertising (Bagwell 2005).

Substantial empirical evidence demonstrates that advertising of prices increases competition and lowers the average market price and variance of prices. Conversely, banning price advertising can have the opposite effect, but consumers might derive information from other sources — such as direct observation and word-of-mouth — or firms can compete more on quality (Kwoka 1984). Bans of price advertising also affect product quality indirectly by making it difficult to inform consumers of price-quality tradeoffs. Products for which empirical evidence demonstrates that advertising reduces the average price include toys, drugs, eyeglasses, optometric services, gasoline, and grocery products. Thus, for relatively homogeneous goods, banning price advertising is expected to increase average prices and make entry more difficult. A partial offset occurs if advertising costs are large enough to raise product prices, since a ban eliminates those costs.

The effects of a ban of persuasive advertising also are uncertain. In a differentiated product industry, it is possible that advertising expenditures are so large that an advertising ban reduces costs and product prices, thereby offsetting or defeating the purpose of the ban. For products that are well known to consumers (“mature” products), the presumption is that advertising primarily affects brand shares and has little impact on primary demand (Dekimpe and Hanssens 1995; Scherer and Ross 1990). Advertising bans tend to solidify market shares. Furthermore, most advertising bans are less than complete, such as the ban of broadcast advertising of cigarettes. Producers can substitute other media or use other forms of promotion, such as discount coupons, articles of apparel, and event sponsorship. Thus, government limitations on commercial speech for one product or medium often lead to additional efforts to limit other promotions. This “slippery slope” effect is illustrated by the Federal Communications Commission’s fairness doctrine for advertising of cigarettes (discussed below).

The Industry Advertising-Sales Response Function

The effect of a given ban on market demand depends importantly on the nature of the relationship between advertising expenditures and aggregate sales. This relationship is referred to as the industry advertising-sales response function. Two questions regarding this function have been debated. First, it is not clear that a well-defined function exists at the industry level, since persuasive advertising primarily affects brand shares. The issue is the spillover, if any, from brand advertising to aggregate (primary) market demand. Two studies of successful brand advertising in the alcohol industry failed to reveal a spillover effect on market demand (Gius 1996; Nelson 2001). Second, if an industry-level response function exists, it should be subject to diminishing marginal returns, but it is unclear where diminishing returns begin (the inflection point) or the magnitude of this effect. Some analysts argue that diminishing returns only begin at high levels of industry advertising, and sharply increasing returns exist at moderate to low levels (Saffer 1993). According to this view, comprehensive bans of advertising will substantially reduce market demand. However, this argument is at odds with empirical evidence for a variety of mature products, which demonstrates diminishing returns over a broad range of outlays (Assmus et al. 1984; Tellis 2004). Simon and Arndt (1980) found that diminishing returns began immediately for a majority of the more than 100 products they examined. Furthermore, average advertising elasticities for most mature products are only about 0.1 in magnitude (Sethuraman and Tellis 1991). As a result, limited advertising bans are unlikely to reduce sales of mature products, or any effect is likely to be extremely small in magnitude. It is unlikely that elasticities this small could support the third prong of the Central Hudson test.
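
A back-of-the-envelope calculation shows why elasticities of this size matter; the 20 percent figure below is assumed purely for illustration and is not drawn from the studies cited. With an advertising elasticity of about 0.1, the proportional change in sales implied by a change in industry advertising is approximately

\[ \frac{\Delta Q}{Q} \approx e_A \frac{\Delta A}{A} = 0.1 \times (-0.20) = -0.02, \]

so even a restriction that cut industry advertising by 20 percent would be expected to reduce sales by only about 2 percent, and a partial ban that merely shifts spending to other media or promotions would do less still.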

Suppose that advertising for a particular product convinces some consumers to use Brand X, and this results in more sales of the brand at a higher price. Are consumers better or worse off as a consequence? A shift in consumer preferences toward a fortified brand of breakfast cereal might be described as either a “shift in tastes,” an increase in demand for nutrition, or an increase in joint demand for the cereal and information. Because it concerns individual utility, it is not clear whether a “shift in tastes” reduces or increases consumer satisfaction. Social commentators usually respond that consumers just think they are better off or the demand effect is spurious in nature. Much of the social criticism of advertising is concerned with its pernicious effect on consumer beliefs, tastes, and desires. Vance Packard’s The Hidden Persuaders (1957) was an early, but possibly misguided, effort along these lines (Rogers 1992). Packard wrote that advertisers can “channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.” Of course, once a “hidden secret” is revealed, such manipulation is less effective in the marketplace for products due to cynicism toward advertisers or outright rejection of the advertising claims.

Dixit and Norman (1978) argued that because profit-maximizing firms tend to over-advertise, small decreases in advertising will raise consumer welfare. In their analysis, this result holds regardless of the change in tastes or what product features are being advertised. Becker and Murphy (1993) responded that advertising is usually a complement to products, so it is unclear that equilibrium prices will always be higher as advertising increases. Further, it does not follow that social welfare is higher without any advertising. Targeting by advertisers also helps to increase the efficiency of advertising and reduces the tendency to waste advertising dollars on uninterested consumers through redundant ads. Nevertheless, this common practice also is criticized by social commentators and regulatory agencies. In summary, the evaluation of advertising bans requires empirical evidence. Much of the evidence on advertising bans is econometric and most of it concerns two products, alcohol beverages and cigarettes.

Advertising Bans: Beverage Alcohol

In an interesting way, the history of alcohol consumption follows the laws of supply and demand. The consumption of ethyl alcohol as a beverage began some 10,000 years ago. Due to the uncertainties of contaminated water supplies in the West, alcohol is believed to have been the most popular and safe daily beverage for centuries (Valle 1998). In the East, boiled water in the form of teas solved the problem of potable beverages. Throughout the Middle Ages, beer and ale were drunk by common folk and wine by the affluent. Following the decline of the Roman Empire, the Catholic Church entered the profitable production of wines. Distillation of alcohol was developed in the Arab world around 700 A.D. and gradually spread to Europe, where distilled spirits were used ineffectively as a cure for plague in the fourteenth century. During the seventeenth century, several non-alcohol beverages became popular, including coffee, tea, and cocoa. In the late eighteenth century, religious sentiment turned against alcohol and temperance activity figured prominently in the concerns of the Baptist, Friends, Methodist, Mormon, Presbyterian, and Unitarian churches. It was not until the late nineteenth century that filtration and treatment made safe drinking water supplies more widely available.

During the colonial period, retail alcohol sellers were licensed by states, local courts, or town councils (Byse 1940). Some colonies fixed the number of licenses or bonded the retailer. Fixing of maximum prices by legislatures and the courts encouraged adulteration and misbranding by retailers. In 1829, the state of Maine passed the first local option law and in 1844, the territory of Oregon enacted a general prohibition law. Experimentation with statewide monopoly of the retail sale of alcohol began in 1893 in South Carolina. As early as 1897, federal regulation of labeling was enacted through the Bottled-in-Bond Act. Following the repeal of Prohibition in 1933, the Federal Alcohol Control Administration was created by executive order (O’Neill 1940). The Administration immediately set about creating “fair trade codes” that governed false and misleading advertising, unfair trade practices, and prices that were “oppressively high or destructively low.” These codes discouraged price and advertising competition, and encouraged shipping expansion by the major midwestern brewers (McGahan 1991). The Administration ceased to function in 1935 when the National Industrial Recovery Act was declared unconstitutional. The passage of the Federal Alcohol Administration Act in 1935 created the Federal Alcohol Administration (FAA) within the Treasury Department, which regulated trade practices and enforced the producer permit system required by the Act. In 1939, the FAA was abolished and its duties were transferred to the Alcohol Tax Unit of the Internal Revenue Service (later named the Bureau of Alcohol, Tobacco, and Firearms). The ATF presently administers a broad range of provisions regarding the formulation, labeling, and advertising of alcohol beverages.

Alcohol Advertising: Analytical Methods

Three types of econometric studies examine the effects of advertising on the market demand for beverage alcohol. First, time-series studies examine the relationship between alcohol consumption and annual or quarterly advertising expenditures. Recent examples of such studies include Calfee and Scheraga (1994), Coulson et al. (2001), Duffy (1995, 2001), Lariviere et al. (2000), Lee and Tremblay (1992), and Nelson (1999). All of these studies find that advertising has no effect on total alcohol consumption and small or nonexistent effects on beverage demand (Nelson 2001). This result is not affected by disaggregating advertising to account for different effects by media (Nelson 1999). Second, cross-sectional and panel studies examine the relationship between alcohol consumption and state regulations, such as state bans of billboards. Panel studies combine cross-sectional (e.g., all 50 states) and time-series information (50 states for the period 1980-2000), which increases the amount of variation in the data. Third, cross-national studies examine the relationship between alcohol consumption and advertising bans for a panel of countries. This essay discusses results obtained in the second and third types of studies.
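
As a point of reference, the state-level panel studies discussed below estimate demand equations of broadly the following form; this is a generic sketch rather than the exact specification of any of the papers cited:

\[ \ln Q_{it} = \beta_0 + \beta_1 Ban_{it} + \beta_2 \ln P_{it} + \beta_3 \ln Y_{it} + \gamma' X_{it} + \mu_i + \tau_t + \varepsilon_{it} \]

Here Q_{it} is per capita ethanol consumption in state i and year t, Ban_{it} is an indicator for an advertising ban, P_{it} and Y_{it} are alcohol prices and income, X_{it} collects other controls (tourism, age demographics, the minimum drinking age), and \mu_i and \tau_t are state and year effects. The coefficient of interest is \beta_1, the approximate proportional change in consumption associated with a ban.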

Background: State Regulation of Billboard Advertising

In the United States, the distribution and retail sale of alcohol beverages is regulated by the individual states. The Twenty-First Amendment, passed in 1933, repealed Prohibition and granted the states legal powers over the sale of alcohol, thereby resolving the conflicting interests of “wets” and “drys” (Goff and Anderson 1994; Munger and Schaller 1997; Shipman 1940; Strumpf and Oberholzer-Gee 2000). As a result, alcohol laws vary importantly by state, and these differences represent a natural experiment with regard to the economic effects of regulation. Long-standing differences in state laws potentially affect the organization of the industry and alcohol demand, reflecting incentives that alter or shape individual behaviors. State laws also differ by beverage, suggesting that substitution among beverages is one possible consequence of regulation. For example, state laws for distilled spirits typically are more stringent than similar laws applied to beer and wine. While each state has adopted its own unique regulatory system, several broad categories can be identified. Following repeal, eighteen states adopted public monopoly control of the distribution of distilled spirits. Thirteen of these states operate off-premise retail stores for the sale of spirits, and two states also control retail sales of table wine. In five states, only the wholesale distribution of distilled spirits is controlled. No state has monopolized beer sales, but laws in three states provide for restrictions on private beer sales by alcohol content. In the private license states, an Alcohol Beverage Control (ABC) agency determines the number and type of retail licenses, subject to local wet-dry options. Because monopoly states have broad authority to restrict the marketing of alcohol, the presumption is that total alcohol consumption will be lower in the control states compared to the license states. Monopoly control also raises search costs by restricting outlet numbers, hours of operation, and product variety. Because beer and wine are substitutes or complements for spirits, state monopoly control can increase or decrease total alcohol use, or the net effect may be zero (Benson et al. 1997; Nelson 1990, 2003a).

A second broad experiment includes state regulations banning advertising of alcohol beverages or which restrict the advertising of prices. Following repeal, fourteen states banned billboard advertising of distilled spirits, including seven of the license states. Because the bans have been in existence for many years and change infrequently over time, these regulations provide evidence on the long-term effectiveness of advertising bans. It is often argued that billboards have an important effect on youth behaviors, and this belief has been a basis for municipal ordinances banning billboard advertising of tobacco and alcohol. Given long-standing bans, it might be expected that youth alcohol behaviors will show up as cross-state differences in adult per capita consumption. Indeed, these two variables are highly correlated (Cook and Moore 2000, 2001). Further, fifteen states banned price advertising by retailers using billboards, newspapers, and visible store displays. In general, a ban of price advertising reduces retail competition and increases search costs of consumers. However, these regulations were not intended to advance temperance, but rather were anti-competitive measures obtained by alcohol retailers (McGahan 1995). For example, in 44 Liquormart (1996) the lower court noted that Rhode Island’s ban of price advertising was designed to protect smaller retailers from in-state and out-of-state competition, and was closely monitored by the liquor retailers association. A price advertising ban could reduce alcohol consumption by elevating full prices (search costs plus monetary prices). Because many states banned only price advertising of spirits, substitution among beverages also is a possible outcome.

Table 1 illustrates historical changes since 1935 in alcohol consumption in the United States and three individual states; it also shows nominal and real advertising expenditures for the U.S. After peaking in the early 1980s, per capita alcohol consumption is now at roughly the level experienced in the early 1960s. Nationally, the decline in alcohol consumption from 1980 to 2000 was 21.0%, and it occurred despite continued high levels of advertising and promotion. At the state level, the percentage changes in consumption over the same period are Illinois, -25.3%; Ohio, -15.5%; and Pennsylvania, -20.5%. Pennsylvania is a state monopoly for spirits and wines and also banned price advertising of alcohol, including beer, prior to 1997. However, the change in per capita consumption in Pennsylvania parallels what has occurred nationally.
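
These percentages follow directly from the consumption columns of Table 1. For the U.S. and Illinois between 1980 and 2000, for example,

\[ \frac{2.18 - 2.76}{2.76} \approx -0.210 \qquad \text{and} \qquad \frac{2.24 - 3.00}{3.00} \approx -0.253, \]

declines of about 21.0 and 25.3 percent, respectively.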

Econometric Results: State-Level Studies of Billboard Bans

Seven econometric studies estimate the relationship between state billboard bans and alcohol consumption: Hoadley et al. (1984), Nelson (1990, 2003a), Ornstein and Hanssens (1985), Schweitzer et al. (1983), and Wilkinson (1985, 1987). Two studies used a single year, but the other five employed panel data covering five to 25 years. Two studies estimated demand functions for beer or distilled spirits only, which ignores substitution. None of the studies obtained a statistically significant reduction in total alcohol consumption due to bans of billboards. In several studies, billboard bans increased spirits consumption significantly. A positive effect of a ban is contrary to general expectations, but consistent with various forms of substitution. The study by Nelson (2003a) covered 45 states for the time period 1982-1997. In contrast to earlier studies, Nelson (2003a) focused on substitution among alcohol beverages and the resulting net effect on total ethanol consumption. Several subsamples were examined, including all 45 states, ABC-license states, and two time periods, 1982-1988 and 1989-1997. A number of other variables also were considered, including prices, income, tourism, age demographics, and the minimum drinking age. During both time periods, state billboard bans increased consumption of wine and spirits, and reduced consumption of beer. The net effect on total ethanol consumption was significantly positive during 1982-1988, and insignificant thereafter. During both time periods, bans of price advertising of spirits were associated with lower consumption of spirits, higher consumption of beer, and no effect on wine or total alcohol consumption. The results in this study demonstrate that advertising regulations have different effects by beverage, indicating the importance of substitution. Public policy statements that suggest that limited bans have a singular effect are ignoring market realities. The empirical results in Nelson (2003a) and other studies are consistent with the historic use of billboard bans as a device to suppress competition, with little or no effect on temperance.

Econometric Results: Cross-National Studies of Broadcast Bans

Many Western nations have restrictions on radio and television advertising of alcohol beverages, especially distilled spirits. These controls range from time-of-day restrictions and content guidelines to outright bans of broadcast advertising of all alcohol beverages. Until quite recently, the trend in most countries has been toward stricter rather than more lenient controls. Following repeal, U.S. producers of distilled spirits adopted a voluntary Code of Good Practice that barred radio advertising after 1936 and television advertising after 1948. When this voluntary agreement ended in late 1996, cable television stations began carrying ads for distilled spirits. The major TV networks continued to refuse such commercials. Voluntary or self-regulatory codes also have existed in a number of other countries, including Australia, Belgium, Germany, Italy, and the Netherlands. By the end of the 1980s, a number of countries had banned broadcast advertising of spirits, including Austria, Canada, Denmark, Finland, France, Ireland, Norway, Spain, Sweden, and the United Kingdom (Brewers Association of Canada 1997).

Table 1
Advertising and Alcohol Consumption (gallons of ethanol per capita, 14+ yrs)

Year | Illinois (gal. p.c.) | Ohio (gal. p.c.) | Pennsylvania (gal. p.c.) | U.S. (gal. p.c.) | Alcohol Ads (mil. $) | Real Ads (mil. 96$) | Real Ads per capita (96$) | Percent Broadcast
1935 1.20
1940 1.56
1945 2.25
1950 2.04
1955 2.00
1960 2.07
1965 2.27 242.2 1018.5 7.50 38.7
1970 2.82 2.22 2.28 2.52 278.4 958.0 6.41 34.7
1975 2.99 2.21 2.35 2.69 395.6 979.9 5.99 44.0
1980 3.00 2.33 2.39 2.76 906.9 1580.5 8.83 55.1
1981 2.91 2.25 2.37 2.76 1014.9 1618.7 8.91 56.6
1982 2.83 2.28 2.36 2.72 1108.7 1667.0 9.07 58.1
1983 2.80 2.22 2.29 2.69 1182.9 1708.4 9.18 62.0
1984 2.77 2.26 2.25 2.65 1284.4 1788.9 9.50 66.0
1985 2.72 2.20 2.22 2.62 1293.0 1746.1 9.16 68.2
1986 2.68 2.17 2.23 2.58 1400.2 1850.6 9.61 73.5
1987 2.66 2.17 2.20 2.54 1374.7 1766.1 9.09 73.5
1988 2.64 2.11 2.11 2.48 1319.4 1639.8 8.37 74.4
1989 2.56 2.07 2.10 2.42 1200.4 1436.6 7.27 68.2
1990 2.62 2.09 2.15 2.45 1050.4 1209.7 6.10 64.8
1991 2.48 2.03 2.05 2.30 1119.5 1247.2 6.22 66.4
1992 2.43 1.98 1.99 2.30 1074.7 1172.0 5.78 68.5
1993 2.38 1.95 1.96 2.23 970.7 1030.9 5.04 70.4
1994 2.35 1.85 1.93 2.18 1000.9 1041.1 5.03 69.4
1995 2.29 1.90 1.86 2.15 1027.5 1046.4 5.00 68.2
1996 2.30 1.93 1.86 2.16 1008.8 1008.8 4.77 68.5
1997 2.26 1.91 1.84 2.14 1087.0 1069.2 5.01 66.5
1998 2.25 1.97 1.86 2.14 1187.6 1154.6 5.36 66.3
1999 2.27 2.00 1.87 2.16 1242.2 1189.5 5.45 64.2
2000 2.24 1.97 1.90 2.18 1422.6 1330.8 5.89 62.8

Sources: 1965-70 ad data from Adams-Jobson Handbooks; 1975-91 data from Impact; and 1992-2000 data from LNA/Competitive Media. Nominal data deflated by the GDP implicit price deflator (1996 = 100). Alcohol data from National Institute on Alcohol Abuse and Alcoholism, U.S. Apparent Consumption of Alcoholic Beverages (1997) and 2003 supplement. Real advertising per capita is for ages 14+ based on NIAAA and author’s population estimates.

The possible effects of broadcast bans are examined in four studies: Nelson and Young (2001), Saffer (1991), Saffer and Dave (2002), and Young (1993). Because alcohol behavior or “cultural sentiment” varies by country, it is important that the social setting is considered. In particular, the level of alcohol consumption in the wine-drinking countries is substantially greater. In France, Italy, Luxembourg, Portugal, and Spain, alcohol consumption is about one-third greater than average (Nelson and Young 2001). Further, 20 to 25% of consumption in the Scandinavian countries is systematically under-reported due to cross-border purchases, smuggling, and home production. In contrast to other studies, Nelson and Young (2001) accounted for these differences. The study examined alcohol demand and related behaviors in a sample of 17 OECD countries (western Europe, Canada, and the U.S.) for the period 1977 to 1995. Control variables included prices, income, tourism, age demographics, unemployment, and drinking sentiment. The results indicated that bans of broadcast advertising of spirits did not decrease per capita alcohol consumption. During the sample period, five countries adopted broadcast bans of all alcohol beverage advertisements, apart from light beer (Denmark, Finland, France, Norway, Sweden). The regression estimates for complete bans were insignificantly positive. The results indicated that bans of broadcast advertising had no effect on alcohol consumption relative to countries that did not ban broadcast advertising. For the U.S., the cross-country results are consistent with studies of successful brands, studies of billboard bans, and studies of advertising expenditures (Nelson 2001). The results are inconsistent with an advertising-response function with a well-defined inflection point.

Advertising Bans: Cigarettes

Prior to 1920, consumption of tobacco in the U.S. was mainly in the form of cigars, pipe tobacco, chewing tobacco, and snuff. It was not until 1923 that cigarette consumption by weight surpassed that of cigars (Forey et al. 2002). Several early developments contributed to the rise of the cigarette (Borden 1942). First, the Bonsack cigarette-making machine was patented in 1880 and perfected in 1884 by James Duke. Second, the federal excise tax on cigarettes, instituted to help pay for the Civil War, was reduced in 1883 from $1.75 to 50 cents a thousand pieces. Third, during World War I, cigarette consumption by soldiers was encouraged by ease of use and low cost. Fourth, the taboo against public smoking by women began to wane, although participation by women remained substantially below that of men. By 1935, about 50% of men smoked compared to only 20% of women. Fifth, advertising has been credited with expanding the market for lighter-blends of tobacco, although evidence in support of this claim is lacking (Tennant 1950). Some early advertising claims were linked to health, such as a 1928 ad for Lucky Strike that emphasized, “No Throat Irritation — No Cough.” During this time, the FTC banned numerous health claims by de-nicotine products and devices, e.g., 10 FTC 465 (1925).

Cigarette advertising has been especially controversial since the early 1950s, reflecting known health risks associated with smoking and the belief that advertising is a causal factor in smoking behaviors. Warning labels on cigarette packages were first proposed in 1955, following new health reports by the American Cancer Society, the British Medical Research Council, and Reader’s Digest (1952). Regulation of cigarette advertising and marketing, especially by the FTC, increased over the years to include content restrictions (1942, 1950-52); advertising guidelines (1955, 1960, 1966); package warning labels (1965, 1970, 1984); product testing and labeling (1967, 1970); public reporting on advertising trends (1964, 1967, 1981); warning messages in advertisements (1970); and advertising bans (1971, 1998). The history of these regulations is discussed below.

Background: Cigarette Prohibition and Early Health Reports

During the 17th and 18th centuries, several northern colonies and cities banned public smoking. In 1638, the Plymouth colony passed a law forbidding smoking in the streets and, in 1798, Boston banned the carrying of a lighted pipe or cigar in public. Beginning around 1850, a number of anti-tobacco groups were formed (U.S. Surgeon General 2000), including the American Anti-Tobacco Society (1849), the American Health and Temperance Association (1878), the Department of Narcotics of the Women’s Christian Temperance Union (1883), the Anti-Cigarette League (1899), and the Non-Smokers Protective League (1911). The WCTU was a force behind the cigarette prohibition movement in Canada and the U.S. During the Progressive Era, fifteen states passed laws prohibiting the sale of cigarettes to adults and another twenty-one states considered such laws (Alston et al. 2002). North Dakota and Iowa were the first states to adopt such bans, in 1896 and 1897, respectively. In West Virginia, cigarettes were taxed so heavily that they were de facto prohibited. In 1920, Lucy Page Gaston of the WCTU made a bid for the Republican nomination for president on an anti-tobacco platform. However, the movement waned as the laws were largely unenforceable. By 1928, cigarettes were again legal for sale to adults in every state.

As the popularity of cigarette smoking spread, so too did concerns about its health consequences. As a result, the hazards of smoking have long been common knowledge. A number of physicians took early notice of a tobacco-cancer relationship in their patients. In 1912, Isaac Adler published a book on lung cancer that implicated smoking. In 1928, adverse health effects of smoking were reported in the New England Journal of Medicine. A Scientific American report in 1933 tentatively linked cigarette “tars” to lung cancer. Writing in Science in 1938, Raymond Pearl of Johns Hopkins University demonstrated a statistical relationship between smoking and reduced longevity (Pearl 1938). The addictive properties of nicotine were reported in 1942 in the British medical journal The Lancet. These and other reports attracted little attention from the popular press, although Reader’s Digest (1924, 1941) was an early crusader against smoking. In 1950, three classic scientific papers appeared that linked smoking and lung cancer. Major prospective studies followed in 1953-54. At this time, the research findings were more widely reported in the popular press (e.g., Time 1953). In 1957, the Public Health Service accepted a causal relationship between smoking and lung cancer (Burney 1959; Joint Report 1957). Between 1950 and 1963, researchers published more than 3,000 articles on the health effects of smoking.

Cigarette Advertising: Analytical Methods

Given the rising concern about the health effects of smoking, it is not surprising that cigarette advertising would come under fire. Public health officials generally take it for granted that advertising stimulates primary demand, since in their view cigarette advertising is inherently deceptive. The econometric evidence is much less clear. Three methods are used to assess the relationship between cigarette consumption and advertising. First, time-series studies examine the relationship between cigarette consumption and annual or quarterly advertising expenditures. These studies have been reviewed several times, including comprehensive surveys by Cameron (1998), Duffy (1996), Lancaster and Lancaster (2003), and Simonich (1991). Most time-series studies find little or no effect of advertising on primary demand for cigarettes. For example, Duffy (1996) concluded that “advertising restrictions (including bans) have had little or no effect upon aggregate consumption of cigarettes.” A meta-analysis by Andrews and Franke (1991) found that the average elasticity of cigarette consumption with respect to advertising expenditure was only 0.142 during 1964-1970, and declined to -0.007 thereafter. Second, cross-national studies examine the relationship between per capita cigarette consumption and advertising bans for a panel of countries. Third, several time-series studies examine the effects of health scares and the 1971 ban of broadcast advertising. This essay discusses results obtained in the second and third types of econometric studies.
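
The elasticities summarized in these surveys are read directly from the coefficient on log advertising in a double-log demand equation. The sketch below illustrates the calculation with a hypothetical quarterly data set; the file name, variable names, and simple specification are assumptions for the example, not the specification of any particular study.

    # Sketch of a double-log time-series demand equation.  The coefficient on
    # log(advertising) is the advertising elasticity of consumption -- the kind
    # of estimate summarized in the meta-analysis by Andrews and Franke (1991).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical quarterly series: cigarette sales per capita, real price,
    # real disposable income per capita, and real advertising expenditures.
    ts = pd.read_csv("cigarettes_quarterly.csv")

    model = smf.ols(
        "np.log(sales) ~ np.log(price) + np.log(income) + np.log(advertising)",
        data=ts,
    )
    result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West standard errors
    ad_elasticity = result.params["np.log(advertising)"]
    print(f"Estimated advertising elasticity of demand: {ad_elasticity:.3f}")

An elasticity near zero, as in the meta-analysis cited above, means that even large percentage changes in advertising are associated with negligible percentage changes in consumption.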

Econometric Results: Cross-National Studies of Broadcast Bans

Systematic tests of the effect of advertising bans are provided by four cross-national panel studies that examine annual per capita cigarette consumption among OECD countries: Laugesen and Meads (1991); Stewart (1993); Saffer and Chaloupka (2000); and Nelson (2003b). Results in the first three studies are less than convincing for several reasons. First, advertising bans might be endogenously determined together with cigarette consumption, but earlier studies treated advertising bans as exogenous. In order to avoid the potential bias associated with endogenous regressors, Nelson (2003b) estimated a structural equation for the enabling legislation that restricts advertising. Second, annual data on cigarette consumption contain pronounced negative trends, and the data series in levels are unlikely to be stationary. Nelson (2003b) tested for unit roots and used consumption growth rates (log first-differences) to obtain stationary data series for a sample of 20 OECD countries. Third, the study also tested for structural change in the smoking-advertising relationship. The motivation was based on the following set of observations: by the mid-1960s the risks associated with smoking were well known and cigarette consumption began to decline in most countries. For example, per capita consumption in the United States reached an all-time high in 1963 and declined modestly until about 1978. Between 1978 and 1995, cigarette consumption in the U.S. declined on average by 2.44% per year. Further, the decline in consumption was accompanied by reductions in smoking prevalence. In the U.S., male smoking prevalence declined from 52% of the population in 1965 to 33% in 1985 and 27% in 1995 (Forey et al. 2002). Smoking also is increasingly concentrated among individuals with lower incomes or lower levels of education (U.S. Public Health Service 1994). Changes in prevalence suggest that the sample of smokers will not be homogeneous over time, which implies that empirical estimates may not be robust across different time periods.
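
A minimal sketch of the stationarity checks just described, assuming a hypothetical annual series of cigarettes per capita for a single country: an augmented Dickey-Fuller test on the log level, followed by conversion to log first-differences (growth rates) when a unit root cannot be rejected.

    # Sketch of the unit-root / differencing step, applied here to a single
    # hypothetical country series (file and column names are assumptions).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    cigs = pd.read_csv("cigs_per_capita.csv", index_col="year")["per_capita"]
    log_level = np.log(cigs)

    adf_stat, p_value, *rest = adfuller(log_level, autolag="AIC")
    print(f"ADF statistic on log level: {adf_stat:.2f} (p = {p_value:.2f})")

    # If a unit root cannot be rejected, work with log first-differences,
    # i.e., approximate annual growth rates of consumption.
    growth = log_level.diff().dropna()
    adf_stat_d, p_value_d, *rest = adfuller(growth, autolag="AIC")
    print(f"ADF statistic on growth rate: {adf_stat_d:.2f} (p = {p_value_d:.2f})")

Pooling the growth-rate series across countries, rather than the trending levels, is how stationary data were obtained in the panel study.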

Nelson (2003b) focused on total cigarettes, defined as the sum of manufactured and hand-rolled cigarettes for 1970-1995. Data on cigarette and tobacco consumption were obtained from International Smoking Statistics (Forey et al. 2002). This comprehensive source includes estimates of sales in OECD countries for manufactured cigarettes, hand-rolled cigarettes, and total consumption by weight of all tobacco products. The data series begin around 1948 and extend to 1995. Regulatory information on advertising bans and health warnings was obtained from Health New Zealand’s International Tobacco Control Database and the World Health Organization’s International Digest of Health Legislation. For each country and year, HNZ reports the media in which cigarette advertising is banned. Nine media are covered, including television, radio, cinema, outdoor, newspapers, magazines, shop ads, sponsorships, and indirect advertising such as brand names on non-tobacco products. Based on these data, three dummy variables were defined: TV-RADIO (= 1 if only television and radio are banned, zero otherwise); MODERATE (= 1 if 3 or 4 media are banned); and STRONG (= 1 if 5 or more media are banned). On average, 4 to 5 media were banned in the 1990s compared to only 1 or 2 in the 1970s. Except for Austria, Japan and Spain, all OECD countries by 1995 had enacted moderate or strong bans of cigarette advertising. In 1995, there were 9 countries in the strong category compared to 5 in 1990, 4 in 1985, and only 3 countries in 1980 and earlier. Additional control variables in the study included prices, income, warning labels, unemployment rates, percent filter cigarettes, and demographics.
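
The three regulatory dummies can be constructed mechanically from indicators of which media are covered. The sketch below assumes a hypothetical country-year data frame with one 0/1 column per medium; the cutoffs (television and radio only; 3 or 4 media; 5 or more media) follow the definitions just given.

    # Sketch of the TV-RADIO / MODERATE / STRONG classification described above.
    # `ad_bans.csv` is a hypothetical file with columns country, year, and one
    # 0/1 indicator per medium in which cigarette advertising is banned.
    import pandas as pd

    media = ["tv", "radio", "cinema", "outdoor", "newspapers",
             "magazines", "shop_ads", "sponsorship", "indirect"]
    bans = pd.read_csv("ad_bans.csv")
    n_banned = bans[media].sum(axis=1)   # number of the nine media banned

    bans["TV_RADIO"] = ((bans["tv"] == 1) & (bans["radio"] == 1) & (n_banned == 2)).astype(int)
    bans["MODERATE"] = n_banned.isin([3, 4]).astype(int)
    bans["STRONG"] = (n_banned >= 5).astype(int)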

The results in Nelson (2003b) indicate that cigarette consumption is determined largely by prices, income, and exogenous country-specific factors. The dummy variables for advertising bans were never significantly negative. The income elasticity was significantly positive and the price elasticity was significantly negative. The price elasticity estimate of -0.39 is essentially identical to the consensus estimate of -0.4 for aggregate data (Chaloupka and Warner 2000). Beginning about 1985, the decline in smoking prevalence resulted in a shift in price and income elasticities. There was also a change in the political climate favoring additional restrictions on advertising, but these restrictions followed, rather than caused, reductions in smoking and smoking prevalence, an instance of “reverse causality.” Thus, advertising bans had no demonstrated influence on cigarette demand in the OECD countries, including the U.S., and the advertising-response model that motivates past studies is not supported by these results. The data and estimation procedures used in the three previous studies are picking up the substantial declines in consumption that began in the late 1970s, declines that were unrelated to major changes in advertising restrictions.
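
As a rough illustration of what a price elasticity of -0.39 implies, consider the following back-of-the-envelope calculation; the 10% price increase is a hypothetical figure chosen for the example, not a number from the study.

    # Back-of-the-envelope use of the aggregate price elasticity noted above.
    price_elasticity = -0.39      # estimate reported in Nelson (2003b)
    price_change = 0.10           # hypothetical 10% increase in the real price
    consumption_change = price_elasticity * price_change
    print(f"Approximate change in consumption: {consumption_change:.1%}")  # about -3.9%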

Background: Regulation of Cigarettes by the Federal Trade Commission

At the urging of President Wilson, the Federal Trade Commission (FTC) was created by Congress in 1914. The Commission was given the broad mandate to prevent “unfair methods of competition.” From the very beginning, this mandate was interpreted to include false and deceptive advertising, even though advertising per se was not an antitrust issue. Indeed, the first cease-and-desist order issued by the FTC concerned false advertising, 1 FTC 13 (1916). It was the age of patent medicines and health-claim devices. As early as 1925, FTC orders against false and misleading advertising constituted 75 percent of all orders issued each year. However, in Raladam (1931) the Supreme Court held that false advertising could be prevented only in situations where injury to a competitor could be demonstrated. The Wheeler-Lea Act of 1938 added a prohibition of “unfair or deceptive acts or practices” in or affecting commerce. This amendment broadened Section 5 of the FTC Act to include consumer interests as well as business concerns. The FTC could thereafter proceed against unfair and deceptive methods without regard to alleged effects on competitors.

As an independent regulatory agency, the FTC has rulemaking and adjudicatory authorities (Fritschler and Hoefler 1996). Its rulemaking powers are quasi-legislative, including the authority to hold hearings and trade practice conferences, subpoena witnesses, conduct investigations, and issue industry guidelines and proposals for legislation. Its adjudicatory powers are quasi-judicial, including the authority to issue cease-and-desist orders, consent decrees, injunctions, trade regulation rules, affirmative disclosure and substantiation orders, corrective advertising orders, and advisory opinions. Administrative complaints are adjudicated before an administrative law judge in trial-like proceedings. Rulemaking by the FTC is characterized by broad applicability to all firms in an industry, whereas judicial policy is based on a single case and affects directly only those named in the suit. Of course, once a precedent is established, it may affect other firms in the same situation. Because the FTC lacks a well-defined constituency, except possibly small business, its use of these broad powers has always been controversial (Clarkson and Muris 1981; Hasin 1987; Miller 1989; Posner 1969, 1973; Stone 1977).

Beginning in 1938, the FTC used its authority to issue “unfair and deceptive” advertising complaints against the major cigarette companies. These actions, known collectively as the “health claims cases,” resulted in consent decrees or cease-and-desist orders involving several major brands during the 1940s and early 1950s. As several cases neared the final judgment phase, in September 1954 the FTC sent a letter to all companies proposing a seven-point list of advertising standards in light of “scientific developments with regard to the [health] effects of cigarette smoking.” A year later, the FTC issued its Cigarette Advertising Guides, which forbade any reference to the physical effects of smoking, as well as any representation that a brand of cigarette is low in nicotine or tars that “has not been established by a competent scientific proof.” Following several articles in Reader’s Digest, cigarette advertising in 1957-1959 shifted to an emphasis on tar and nicotine reduction, the so-called “tar derby.” The FTC initially tolerated these ads if based on tests conducted by Reader’s Digest or Consumer Reports. In 1958, the FTC hosted a two-day conference on tar and nicotine testing, and in 1960 it negotiated a trade practice agreement stating that “all representations of low or reduced tar or nicotine, whether by filtration or otherwise, will be construed as health claims.” This action was blamed for halting a trend toward increased consumption of lower-tar cigarettes (Calfee 1997a; Neuberger 1963). The FTC vacated this agreement in 1966 when it informed the companies that it would no longer consider advertising that contained “a factual statement of tar and nicotine content” a violation of its Advertising Guides.

On January 11, 1964, the Surgeon General’s Advisory Committee on Smoking and Health issued its famous report on Smoking and Health (U.S. Surgeon General 1964). One week after the report’s release, the FTC initiated proceedings “for promulgation of trade regulation rules regarding unfair and deceptive acts or practices in the advertising and labeling of cigarettes” (notice, 29 Fed Reg 530, January 22, 1964; final rule, 29 Fed Reg 8325, July 2, 1964). The proposed Rule required that all cigarette packages and advertisements disclose prominently the statement, “Caution: Cigarette smoking is dangerous to health [and] may cause death from cancer and other diseases.” Failure to include the warning would be regarded as a violation of the FTC Act. The industry challenged the Rule on grounds that the FTC lacked the statutory authority to issue industry-wide trade rules, absent congressional guidance. The major companies also established their own Cigarette Advertising Code, which prohibited advertising aimed at minors, health-related claims, and celebrity endorsements.

The FTC’s Rule resulted in several congressional bills that culminated in the Federal Cigarette Labeling and Advertising Act of 1965 (P.L. 89-92, effective Jan. 1, 1966). The Labeling Act required each cigarette package to contain the statement, “Caution: Cigarette Smoking May Be Hazardous to Your Health.” According to the Act’s declaration of policy, the warnings were required so that “the public may be adequately informed that cigarette smoking may be hazardous to the health.” The Act also required the FTC to report annually to Congress concerning (a) the effectiveness of cigarette labeling, (b) current practices and methods of cigarette advertising and promotion, and (c) such recommendations for legislation as it may deem appropriate. In 1967, the FTC began its annual reporting to Congress on cigarette advertising. It recommended that the health warning be extended to advertising and strengthened to conform to its original proposal, and it called for research on less-hazardous cigarettes. These recommendations were repeated in 1968 and 1969, and a recommendation was added that advertising on television and radio should be banned.

Several other important regulatory actions also took place in 1967-1970. First, the FTC established a laboratory to conduct standardized testing of tar and nicotine content for each brand. In November 1967, the FTC commenced public reporting of tar and nicotine levels by brand, together with reports of overall trends in smoking behaviors. Second, in June 1967, the Federal Communications Commission (FCC) ruled that the “fairness doctrine” was applicable to cigarette advertising, which resulted in numerous free anti-smoking commercials by the American Cancer Society and other groups from July 1967 to December 1970.2 Third, in early 1969 the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes (34 Fed Reg 1959, Feb. 11, 1969). The proposal was endorsed by the Television Code Review Board of the National Association of Broadcasters, and its enactment was anticipated by some industry observers. Following the FCC’s proposal, the FTC issued a notice of proposed rulemaking (34 Fed Reg 7917, May 20, 1969) to require more forceful statements on packages and extend the warnings to all advertising as a modification of its 1964 Rule in the “absence of contrary congressional direction.” Congress again superseded the FTC’s actions, and passed the Public Health Cigarette Smoking Act of 1969 (P.L. 91-222, effective Nov. 1, 1970), which banned broadcast advertising after January 1, 1971 and modified the package label to read, “Warning: The Surgeon General Has Determined that Cigarette Smoking Is Dangerous to Your Health.” In 1970, the FTC negotiated agreements with the major companies to (1) disclose tar and nicotine levels in cigarette advertising using the FTC Test Method, and (2) include the health warning in advertising. By 1972, the FTC believed that it had achieved the recommendations in its initial reports to Congress.3

In summary, the FTC has engaged in continuous surveillance of cigarette advertising and marketing practices. Industry-wide regulation began in the early 1940s. As a result, the advertising of cigarettes in the U.S. is more restricted than that of other lawful consumer products. Some regulations are primarily informational (warning labels), while others affect advertising levels directly (broadcast ban). During a six-decade period, the FTC regulated the overall direction of cigarette marketing, including advertising content and placement, warning labels, and product development. Through its testing program, it has influenced the types of cigarettes produced and consumed. The FTC engaged in continuous monitoring of cigarette advertising practices and prepared in-depth reports on these practices; it held hearings on cigarette testing, advertising, and labeling; and it issued consumer advisories on smoking. Directly or indirectly, the FTC has initiated or influenced promotional and product developments in the cigarette industry. However, it remains to be shown that these actions had an important or noticeable effect on cigarette consumption and/or industry advertising expenditures. Is there empirical evidence that federal regulation has affected aggregate cigarette consumption or advertising? If the answer is negative or the effects are limited in magnitude, it suggests that the Congressional and FTC actions after 1964 did not add materially to information already in the marketplace, or that these actions were otherwise misguided.4

Table 2 (below) displays information on smoking prevalence, cigarette consumption, and advertising. Smoking prevalence has declined considerably compared to the 1950s and 1960s. Consumption per capita reached an all-time high in 1963 (4,345 cigarettes per capita) and began a steep decline around 1978. By 1985, consumption was below the level experienced in 1947. Cigarette promotion has changed greatly over the years as producers substituted away from traditional advertising media. As reported by the FTC, the category of non-price promotions includes expenditures on point-of-sale displays, promotional allowances, samples, specialty items, public entertainment, direct mail, endorsements and testimonials, internet, and audio-visual ads. The shift away from media advertising reflects the broadcast and billboard bans as well as the controversies that surround advertising of cigarettes. As a result, by 2000 spending on traditional media amounted to only $356 million, or about 7% of the total marketing outlay of $5.0 billion. Clearly, regulation has affected the type of promotion, but not the overall expenditure.
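
The real and per capita series in Table 2 are simple transformations of the nominal FTC figures. The sketch below mirrors that construction under assumed file and column names: nominal advertising plus non-price promotion is deflated by the GDP implicit price deflator (1996 = 100) and divided by the population aged 18 and over.

    # Sketch of the Table 2 "Real Total per capita" construction
    # (hypothetical file and column names).
    import pandas as pd

    spend = pd.read_csv("cig_marketing.csv")  # year, five_media, non_price (mil. nominal $)
    macro = pd.read_csv("macro.csv")          # year, gdp_deflator (1996 = 100), pop_18plus (millions)

    df = spend.merge(macro, on="year")
    df["real_total"] = (df["five_media"] + df["non_price"]) * 100.0 / df["gdp_deflator"]
    df["real_per_capita"] = df["real_total"] / df["pop_18plus"]   # 1996 dollars per adult
    print(df[["year", "real_total", "real_per_capita"]].tail())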

Econometric Results: U.S. Time-Series Studies of the 1971 Advertising Ban

Several econometric studies examine the effects of the 1971 broadcast ban on cigarette demand, including Franke (1994), Gallet (1999), Ippolito et al. (1979), Kao and Tremblay (1988), and Simonich (1991). None of these studies found that the 1971 broadcast ban had a noticeable effect on cigarette demand. The studies by Franke and Simonich employed quarterly data on cigarette sales. The study by Ippolito et al. covered an extended time period from 1926 to 1975. The studies by Gallet (1999) and by Kao and Tremblay (1988) employed simultaneous-equations methods, but each concluded that the broadcast advertising ban did not have a significant effect on cigarette demand. Although health reports in 1953 and 1964 may have reduced the demand for tobacco, the results do not support a negative effect of the 1971 Congressional broadcast ban. By 1964 or earlier, the adverse effects of smoking appear to have been incorporated in consumers’ decisions regarding smoking. Hence, the advertising restrictions did not contribute to consumer information and therefore did not affect cigarette consumption.
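
A typical specification in this literature augments a standard demand equation with step dummies for the health scares and the post-1971 ban period. The sketch below is illustrative only; the data file, variable names, and functional form are assumptions, not those of the studies cited.

    # Sketch of an intervention-dummy test of the 1971 broadcast ban, in the
    # spirit of the time-series studies cited above (hypothetical data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    ts = pd.read_csv("cigarettes_quarterly.csv", parse_dates=["quarter"])
    ts["health_1953"] = (ts["quarter"] >= "1953-01-01").astype(int)  # early health scare
    ts["health_1964"] = (ts["quarter"] >= "1964-01-01").astype(int)  # Surgeon General report
    ts["post_ban"] = (ts["quarter"] >= "1971-01-01").astype(int)     # broadcast ban in force

    model = smf.ols(
        "np.log(sales) ~ np.log(price) + np.log(income)"
        " + health_1953 + health_1964 + post_ban",
        data=ts,
    )
    result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 4})
    print(result.params[["health_1953", "health_1964", "post_ban"]])

A negative and significant coefficient on the health-scare dummies, combined with an insignificant coefficient on the post-ban dummy, is the pattern consistent with the findings summarized above.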

Conclusions

The First Amendment protects commercial speech, although the degree of protection afforded is less than that given to political speech. Commercial speech jurisprudence has changed profoundly since Congress passed a flat ban on broadcast advertising of cigarettes in 1971. The courts have recognized the vital need for consumers to be informed about market conditions, an environment conducive to the operation of competitive markets. The Central Hudson test requires the courts and agencies to balance the benefits and costs of censorship. The third prong of the test requires that censorship must directly and materially advance a substantial government interest. This essay has discussed the difficulty of establishing a material effect of limited and comprehensive bans of alcohol and cigarette advertisements.

Table 2
Advertising and Cigarette Consumption

Year  Male  Female   Sales  Per cap.  5-Media  Non-Price  Real Total  Real per cap.
1920   --     --      44.6     665       --        --          --          --
1925   --     --      79.8   1,085       --        --          --          --
1930   --     --     119.3   1,485     26.0        --       213.1          --
1935   53     18     134.4   1,564     29.2        --       286.3          --
1940   --     --     181.9   1,976     25.3        --       245.6          --
1947   --     --     345.4   3,416     44.1        --       269.7        2.70
1950   54     33     369.8   3,552     65.5        --       375.4        3.61
1955   50     24     396.4   3,597    104.6        --       528.8        4.83
1960   47     27     484.4   4,171    193.1        --       870.2        7.53
1965   52     34     528.8   4,258    249.9        --      1050.9        8.49
1970   44     31     536.5   3,985    296.6      64.4      1242.3        9.26
1975   39     29     607.2   4,122    330.8     160.5      1227.3        8.28
1980   38     29     631.5   3,849    790.1     452.2      2177.9       13.29
1985   33     28     594.0   3,370    932.0    1544.4      3360.6       19.09
1986   --     --     583.8   3,274    796.3    1586.1      3163.5       17.78
1987   32     27     575.0   3,197    719.2    1861.3      3326.2       18.49
1988   31     26     562.5   3,096    824.5    1576.3      2993.1       16.44
1989   --     --     540.0   2,926    868.3    1788.7      3190.8       17.35
1990   28     23     525.0   2,817    835.2    1973.0      3246.1       17.52
1991   28     24     510.0   2,713    772.6    2054.6      3153.2       16.86
1992   28     25     500.0   2,640    621.5    2435.0      3328.1       17.62
1993   28     23     485.0   2,539    542.1    2933.9      3695.9       19.38
1994   28     23     486.0   2,524    545.1    3039.5      3733.6       19.41
1995   27     23     487.0   2,505    564.2    2982.6      3615.5       18.62
1996   --     --     487.0   2,482    578.2    3220.8      3799.0       19.37
1997   28     22     480.0   2,423    575.7    3561.4      4058.0       20.47
1998   26     22     465.0   2,320    645.6    3908.0      4412.4       22.03
1999   26     22     435.0   2,136    487.7    4659.0      4918.0       24.29
2000   26     21     430.0   2,092    355.8    5015.0      5043.0       24.53
Notes: Male, Female = smoking prevalence (%); Sales = total cigarette sales (billions); Per cap. = cigarettes per capita, ages 18+; 5-Media = advertising expenditures in five media (mil. nominal $); Non-Price = non-price promotion (mil. nominal $); Real Total = advertising plus non-price promotion (mil. 1996$); Real per cap. = real total per capita, ages 18+ (1996$); -- = not reported.
Sources: Smoking prevalence and cigarette sales from Forey et al. (2002) and U.S. Public Health Service (1994). Data on advertising compiled by the author from FTC Reports to Congress (various issues); 1930-1940 data derived from Borden (1942). Nominal data deflated by the GDP implicit price deflator (1996 = 100). Advertising expenditures include TV, radio, newspaper, magazine, outdoor, and transit ads. Promotions exclude price promotions using discount coupons and retail value-added offers (“buy one, get one free”). Real total includes advertising and non-price promotions.

Law Cases

44 Liquormart, Inc., et al. v. Rhode Island and Rhode Island Liquor Stores Assoc., 517 U.S. 484 (1996).

Central Hudson Gas & Electric Corp. v. Public Service Commission of New York, 447 U.S. 557 (1980).

Federal Trade Commission v. Raladam Co., 283 U.S. 643 (1931).

Food and Drug Administration, et al. v. Brown & Williamson Tobacco Corp., et al., 529 U.S. 120 (2000).

Lorillard Tobacco Co., et al. v. Thomas F. Reilly, Attorney General of Massachusetts, et al., 533 U.S. 525 (2001).

Red Lion Broadcasting Co. Inc., et al. v. Federal Communications Commission, et al., 395 U.S. 367 (1969).

Valentine, Police Commissioner of the City of New York v. Chrestensen, 316 U.S. 52 (1942).

Virginia State Board of Pharmacy, et al. v. Virginia Citizens Consumer Council, Inc., et al., 425 U.S. 748 (1976).

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84 (1970): 488-500.

Alston, Lee J., Ruth Dupre, and Tomas Nonnenmacher. “Social Reformers and Regulation: The Prohibition of Cigarettes in the U.S. and Canada.” Explorations in Economic History 39 (2002): 425-45.

Andrews, Rick L. and George R. Franke. “The Determinants of Cigarette Consumption: A Meta-Analysis.” Journal of Public Policy & Marketing 10 (1991): 81-100.

Assmus, Gert, John U. Farley, and Donald R. Lehmann. “How Advertising Affects Sales: Meta-Analysis of Econometric Results.” Journal of Marketing Research 21 (1984): 65-74.

Backman, Jules. Advertising and Competition. New York: New York University Press, 1967.

Bagwell, Kyle. “The Economic Analysis of Advertising.” In Handbook of Industrial Organization, vol. 3, edited by M. Armstrong and R. Porter. Amsterdam: North-Holland, forthcoming 2005.

Becker, Gary and Kevin Murphy. “A Simple Theory of Advertising as a Good or Bad,” Quarterly Journal of Economics 108 (1993): 941-64.

Benson, Bruce L., David W. Rasmussen, and Paul R. Zimmerman. “Implicit Taxes Collected by State Liquor Monopolies.” Public Choice 115 (2003): 313-31.

Borden, Neil H. The Economic Effects of Advertising. Chicago: Irwin, 1942.

Brewers Association of Canada. Alcoholic Beverage Taxation and Control Policies: International Survey, 9th ed. Ottawa: BAC, 1997.

Burney, Leroy E. “Smoking and Lung Cancer: A Statement of the Public Health Service.” Journal of the American Medical Association 171 (1959): 135-43.

Byse, Clark. “Alcohol Beverage Control Before Repeal.” Law and Contemporary Problems 7 (1940): 544-69.

Calfee, John E. “The Ghost of Cigarette Advertising Past.” Regulation 20 (1997a): 38-45.

Calfee, John E. Fear of Persuasion: A New Perspective on Advertising and Regulation. LaVergne, TN: AEI, 1997b.

Calfee, John E. and Carl Scheraga. “The Influence of Advertising on Alcohol Consumption: A Literature Review and an Econometric Analysis of Four European Nations.” International Journal of Advertising 13 (1994): 287-310.

Cameron, Sam. “Estimation of the Demand for Cigarettes: A Review of the Literature.” Economic Issues 3 (1998): 51-72.

Chaloupka, Frank J. and Kenneth E. Warner. “The Economics of Smoking.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1539-1627. New York: Elsevier, 2000.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press, 1977.

Clarkson, Kenneth W. and Timothy J. Muris, eds. The Federal Trade Commission since 1970: Economic Regulation and Bureaucratic Behavior. Cambridge: Cambridge University Press, 1981.

Cook, Philip J. and Michael J. Moore. “Alcohol.” In The Handbook of Health Economics, vol. 1B, edited by A.J. Culyer and J.P. Newhouse, 1629-73. Amsterdam: Elsevier, 2000.

Cook, Philip J. and Michael J. Moore. “Environment and Persistence in Youthful Drinking Patterns.” In Risky Behavior Among Youths: An Economic Analysis, edited by J. Gruber, 375-437. Chicago: University of Chicago Press, 2001.

Coulson, N. Edward, John R. Moran, and Jon P. Nelson. “The Long-Run Demand for Alcoholic Beverages and the Advertising Debate: A Cointegration Analysis.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 31-54. Amsterdam: JAI Press, 2001.

Dekimpe, Marnick G. and Dominique Hanssens. “Empirical Generalizations about Market Evolution and Stationarity.” Marketing Science 14 (1995): G109-21.

Dixit, Avinash and Victor Norman. “Advertising and Welfare.” Bell Journal of Economics 9 (1978): 1-17.

Duffy, Martyn. “Advertising in Demand Systems for Alcoholic Drinks and Tobacco: A Comparative Study.” Journal of Policy Modeling 17 (1995): 557-77.

Duffy, Martyn. “Econometric Studies of Advertising, Advertising Restrictions and Cigarette Demand: A Survey.” International Journal of Advertising 15 (1996): 1-23.

Duffy, Martyn. “Advertising in Consumer Allocation Models: Choice of Functional Form.” Applied Economics 33 (2001): 437-56.

Federal Trade Commission. Staff Report on the Cigarette Advertising Investigation. Washington, DC: FTC, 1981.

Forey, Barbara, et al., eds. International Smoking Statistics, 2nd ed. London: Oxford University Press, 2002.

Franke, George R. “U.S. Cigarette Demand, 1961-1990: Econometric Issues, Evidence, and Implications.” Journal of Business Research 30 (1994): 33-41.

Fritschler, A. Lee and James M. Hoefler. Smoking and Politics: Policy Making and the Federal Bureaucracy, 5th ed. Upper Saddle River, NJ: Prentice-Hall, 1996.

Gallet, Craig A. “The Effect of the 1971 Advertising Ban on Behavior in the Cigarette Industry.” Managerial and Decision Economics 20 (1999): 299-303.

Gius, Mark P. “Using Panel Data to Determine the Effect of Advertising on Brand-Level Distilled Spirits Sales.” Journal of Studies on Alcohol 57 (1996): 73-76.

Goff, Brian and Gary Anderson. “The Political Economy of Prohibition in the United States, 1919-1933.” Social Science Quarterly 75 (1994): 270-83.

Hasin, Bernice R. Consumers, Commissions, and Congress: Law, Theory and the Federal Trade Commission, 1968-1985. New Brunswick, NJ: Transaction Books, 1987.

Hazlett, Thomas W. “The Fairness Doctrine and the First Amendment.” The Public Interest 96 (1989): 103-16.

Hoadley, John F., Beth C. Fuchs, and Harold D. Holder. “The Effect of Alcohol Beverage Restrictions on Consumption: A 25-year Longitudinal Analysis.” American Journal of Drug and Alcohol Abuse 10 (1984): 375-401.

Ippolito, Richard A., R. Dennis Murphy, and Donald Sant. Staff Report on Consumer Responses to Cigarette Health Information. Washington, DC: Federal Trade Commission, 1979.

Joint Report of the Study Group on Smoking and Health. “Smoking and Health.” Science 125 (1957): 1129-33.

Kao, Kai and Victor J. Tremblay. “Cigarette ‘Health Scare,’ Excise Taxes, and Advertising Ban: Comment.” Southern Economic Journal 54 (1988): 770-76.

Kwoka, John E. “Advertising and the Price and Quality of Optometric Services.” American Economic Review 74 (1984): 211-16.

Lancaster, Kent M. and Alyse R. Lancaster. “The Economics of Tobacco Advertising: Spending, Demand, and the Effects of Bans.” International Journal of Advertising 22 (2003): 41-65.

Lariviere, Eric, Bruno Larue, and Jim Chalfant. “Modeling the Demand for Alcoholic Beverages and Advertising Specifications.” Agricultural Economics 22 (2000): 147-62.

Laugesen, Murray and Chris Meads. “Tobacco Advertising Restrictions, Price, Income and Tobacco Consumption in OECD Countries, 1960-1986.” British Journal of Addiction 86 (1991): 1343-54.

Lee, Byunglak and Victor J. Tremblay. “Advertising and the US Market Demand for Beer.” Applied Economics 24 (1992): 69-76.

McGahan, A.M. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-1958.” Business History Review 65 (1991): 229-84.

McGahan, A.M. “Cooperation in Prices and Advertising: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-59.

Miller, James C. The Economist as Reformer: Revamping the FTC, 1981-1985. Washington, DC: American Enterprise Institute, 1989.

Munger, Michael and Thomas Schaller. “The Prohibition-Repeal Amendments: A Natural Experiment in Interest Group Influence.” Public Choice 90 (1997): 139-63.

Nelson, Jon P. “State Monopolies and Alcoholic Beverage Consumption.” Journal of Regulatory Economics 2 (1990): 83-98.

Nelson, Jon P. “Broadcast Advertising and U.S. Demand for Alcoholic Beverages.” Southern Economic Journal 66 (1999): 774-90.

Nelson, Jon P. “Alcohol Advertising and Advertising Bans: A Survey of Research Methods, Results, and Policy Implications.” In Advertising and Differentiated Products, vol. 10, edited by M.R. Baye and J.P. Nelson, 239-95. Amsterdam: JAI Press, 2001.

Nelson, Jon P. “Advertising Bans, Monopoly, and Alcohol Demand: Testing for Substitution Effects Using State Panel Data.” Review of Industrial Organization 22 (2003a): 1-25.

Nelson, Jon P. “Cigarette Demand, Structural Change, and Advertising Bans: International Evidence, 1970-1995.” Contributions to Economic Analysis & Policy 2 (2003b): 1-28. http://www.bepress.com/bejeap/contributions (electronic journal).

Nelson, Jon P. and Douglas J. Young. “Do Advertising Bans Work? An International Comparison.” International Journal of Advertising 20 (2001): 273-96.

Nelson, Phillip. “The Economic Consequences of Advertising.” Journal of Business 48 (1975): 213-41.

Neuberger, Maurine B. Smoke Screen: Tobacco and the Public Welfare. Englewood Cliffs, NJ: Prentice-Hall, 1963.

O’Neill, John E. “Federal Activity in Alcoholic Beverage Control.” Law and Contemporary Problems 7 (1940): 570-99.

Ornstein, Stanley O. and Dominique M. Hanssens. “Alcohol Control Laws and the Consumption of Distilled Spirits and Beer.” Journal of Consumer Research 12 (1985): 200-13.

Packard, Vance O. The Hidden Persuaders. New York: McKay, 1957.

Pearl, Raymond. “Tobacco Smoking and Longevity.” Science 87 (1938): 216-17.

Pope, Daniel. The Making of Modern Advertising. New York: Basic Books, 1983.

Posner, Richard A. “The Federal Trade Commission.” University of Chicago Law Review 37 (1969): 47-89.

Posner, Richard A. Regulation of Advertising by the FTC. Washington, DC: AEI, 1973.

“Does Tobacco Harm the Human Body?” (by I. Fisher). Reader’s Digest (Nov. 1924): 435.

“Nicotine Knockout, or the Slow Count” (by G. Tunney). Reader’s Digest (Dec. 1941): 21.

“Cancer by the Carton” (by R. Norr). Reader’s Digest (Dec. 1952): 7.

Richardson, Gary. “Brand Names before the Industrial Revolution.” Unpub. working paper, Department of Economics, University of California at Irvine, 2000.

Rogers, Stuart. “How a Publicity Blitz Created the Myth of Subliminal Advertising.” Public Relations Quarterly 37 (1992): 12-17.

Russell, Wallace A. “Controls Over Labeling and Advertising of Alcoholic Beverages.” Law and Contemporary Problems 7 (1940): 645-64.

Saffer, Henry. “Alcohol Advertising Bans and Alcohol Abuse: An International Perspective.” Journal of Health Economics 10 (1991): 65-79.

Saffer, Henry. “Advertising under the Influence.” In Economics and the Prevention of Alcohol-Related Problems, edited by M.E. Hilton, 125-40. Washington, DC: National Institute on Alcohol Abuse and Alcoholism, 1993.

Saffer, Henry and Frank Chaloupka. “The Effect of Tobacco Advertising Bans on Tobacco Consumption.” Journal of Health Economics 19 (2000): 1117-37.

Saffer, Henry and Dhaval Dave. “Alcohol Consumption and Alcohol Advertising Bans.” Applied Economics 34 (2002): 1325-34.

Scherer, F. M. and David Ross. Industrial Market Structure and Economic Performance. 3rd ed. Boston: Houghton Mifflin, 1990.

Schweitzer, Stuart O., Michael D. Intriligator, and Hossein Salehi. “Alcoholism.” In Economics and Alcohol: Consumption and Controls, edited by M. Grant, M. Plant, and A. Williams, 107-22. New York: Harwood, 1983.

Sethuraman, Raj and Gerard J. Tellis. “An Analysis of the Tradeoff Between Advertising and Price Discounting.” Journal of Marketing Research 28 (1991): 160-74.

Shipman, George A. “State Administrative Machinery for Liquor Control.” Law and Contemporary Problems 7 (1940): 600-20.

Simmons, Steven J. The Fairness Doctrine and the Media. Berkeley, CA: University of California Press, 1978.

Simon, Julian L. Issues in the Economics of Advertising. Urbana, IL: University of Illinois Press, 1970.

Simon, Julian L. and John Arndt. “The Shape of the Advertising Response Function.” Journal of Advertising Research 20 (1980): 11-28.

Simonich, William L. Government Antismoking Policies. New York: Peter Lang, 1991.

Stewart, Michael J. “The Effect on Tobacco Consumption of Advertising Bans in OECD Countries.” International Journal of Advertising 12 (1993): 155-80.

Stigler, George J. “The Economics of Information.” Journal of Political Economy 69 (1961): 213-25.

Stone, Alan. Economic Regulation and the Public Interest: The Federal Trade Commission in Theory and Practice. Ithaca, NY: Cornell University Press, 1977.

Strumpf, Koleman S. and Felix Oberholzer-Gee. “Local Liquor Control from 1934 to 1970.” In Public Choice Interpretations of American Economic History, edited by J.C. Heckelman, J.C. Moorhouse, and R.M. Whaples, 425-45. Boston: Kluwer Academic, 2000.

Tellis, Gerard J. Effective Advertising: Understanding When, How, and Why Advertising Works. Thousand Oaks, CA: Sage, 2004.

Tennant, Richard B. The American Cigarette Industry. New Haven, CT: Yale University Press, 1950.

“Beyond Any Doubt.” Time (Nov. 30, 1953): 60.

U.S. Congress. Senate. To Prohibit the Advertising of Alcoholic Beverages by Radio. Hearings before the Subcommittee on S. 517. 76th Congress, 1st Session. Washington, DC: U.S. Government Printing Office, 1939.

U.S. Congress. Senate. Liquor Advertising Over Radio and Television. Hearings on S. 2444. 82nd Congress, 2nd Session. Washington, DC: U.S. Government Printing Office, 1952.

U.S. Public Health Service. Smoking and Health. Report of the Advisory Committee to the Surgeon General of the Public Health Service. Washington, DC: U.S. Department of Health, Education, and Welfare, 1964.

U.S. Public Health Service. Surveillance for Selected Tobacco-Use Behaviors — United States, 1900-1994. Atlanta: U.S. Department of Health and Human Services, 1994.

U.S. Public Health Service. Reducing Tobacco Use. A Report of the Surgeon General. Atlanta: U.S. Department of Health and Human Services, 2000.

Vallee, Bert L. “Alcohol in the Western World.” Scientific American 278 (1998): 80-85.

Wilkinson, James T. “Alcohol and Accidents: An Economic Approach to Drunk Driving.” Ph.D. diss., Vanderbilt University, 1985.

Wilkinson, James T. “The Effects of Regulation on the Demand for Alcohol.” Unpub. working paper, Department of Economics, University of Missouri, 1987.

Young, Douglas J. “Alcohol Advertising Bans and Alcohol Abuse: Comment.” Journal of Health Economics 12 (1993): 213-28.

Endnotes

1. See, for example, Packer Corp. v. Utah, 285 U.S. 105 (1932); Breard v. Alexandria, 341 U.S. 622 (1951); E.F. Drew v. FTC, 235 F.2d 735 (1956), cert. denied, 352 U.S. 969 (1957).

2. In 1963, the Federal Communications Commission (FCC) notified broadcast stations that they would be required to give “fair coverage” to controversial public issues (40 FCC 571). The Fairness Doctrine ruling was upheld by the Supreme Court in Red Lion Broadcasting (1969). At the request of John Banzhaf, the FCC in 1967 applied the Fairness Doctrine to cigarette advertising (8 FCC 2d 381). The FCC opined that cigarette advertising was a “unique situation” and extension to other products “would be rare,” but Commissioner Loevinger warned that the FCC would have difficulty distinguishing cigarettes from other products (9 FCC 2d 921). The FCC’s ruling was upheld by the D.C. Circuit Court, which argued that First Amendment rights were not violated because advertising was “marginal speech” (405 F.2d 1082). During the period 1967-70, broadcasters were required to include free antismoking messages as part of their programming. In February 1969, the FCC issued a notice of proposed rulemaking to ban broadcast advertising of cigarettes, absent voluntary action by cigarette producers (16 FCC 2d 284). In December 1969, Congress passed the Smoking Act of 1969, which contained the broadcast ban (effective Jan. 1, 1971). With regard to the Fairness Doctrine, Commissioner Loevinger’s “slippery slope” fears were soon realized. During 1969-1974, the FCC received thousands of petitions for free counter-advertising for diverse products, such as nuclear power, Alaskan oil development, gasoline additives, strip mining, electric power rates, clearcutting of forests, phosphate-based detergents, trash compactors, military recruitment, children’s toys, airbags, snowmobiles, toothpaste tubes, pet food, and the United Way. In 1974, the FCC began an inquiry into the Fairness Doctrine, which concluded that “standard product commercials, such as the old cigarette ads, make no meaningful contribution toward informing the public on any side of an issue . . . the precedent is not at all in keeping with the basic purposes of the fairness doctrine” (48 FCC 2d 1, at 24). After numerous inquiries and considerations, the FCC finally announced in 1987 that the Fairness Doctrine had a “chilling effect” on speech generally and could no longer be sustained as an effective public policy (2 FCC Rcd 5043). Thus ended the FCC’s experiment with regulatory enforcement of a “right to be heard” (Hazlett 1989; Simmons 1978).

3. During the remainder of the 1970s, the FTC concentrated on enforcement of its advertising regulations. It issued consent orders for unfair and deceptive advertising to force companies to include health warnings “clearly and conspicuously in all cigarette advertising.” It required 260 newspapers and 40 magazines to submit information on cigarette advertisements, and established a task force with the Department of Health, Education and Welfare to determine if newspaper ads were deceptive. In 1976, the FTC announced that it was again investigating “whether there may be deception and unfairness in the advertising and promotion of cigarettes.” It subpoenaed documents from 28 cigarette manufacturers, advertising agencies, and other organizations, including copy tests, consumer surveys, and marketing plans. Five years later, it submitted to Congress the results of this investigation in its Staff Report on the Cigarette Advertising Investigation (FTC 1981). The report proposed a system of stronger rotating warnings and covered issues that had emerged regarding low-tar cigarettes, including compensatory behaviors by smokers and the adequacy of the FTC’s Test Method for determining tar and nicotine content. In 1984, President Reagan signed the Comprehensive Smoking Education Act (P.L. 98-474, effective Oct. 12, 1985), which required four rotating health warnings for packages and advertising. Also, in 1984, the FTC revised its definition of deceptive advertising (103 FTC 110). In 2000, the FTC finally acknowledged the shortcomings of its tar and nicotine test method.

4. The Food and Drug Administration (FDA) has jurisdiction over cigarettes as drugs in cases involving health claims for tobacco, additives, and smoking devices. Under Dr. David Kessler, the FDA in 1996 unsuccessfully attempted to regulate all cigarettes as addictive drugs and to impose advertising and other restrictions designed to reduce the appeal and use of tobacco by children (notice, 60 Fed Reg 41313, Aug. 11, 1995; final rule, 61 Fed Reg 44395, Aug. 28, 1996). The rule was vacated by FDA v. Brown & Williamson Tobacco Corp., et al., 529 U.S. 120 (2000).

Citation: Nelson, Jon. “Advertising Bans, US”. EH.Net Encyclopedia, edited by Robert Whaples. May 20, 2004. URL http://eh.net/encyclopedia/nelson-adbans/