
Manpower in Economic Growth: The American Record since 1800

Author(s): Lebergott, Stanley
Reviewer(s): Margo, Robert A.

Classic Reviews in Economic History

Stanley Lebergott, Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964. xii + 561 pp.

Review Essay by Robert A. Margo, Department of Economics, Boston University.

Manpower after Forty Years

During the first half of the twentieth century classical musicians routinely incorporated their personalities into their performances. One recognizes immediately Schnabel in Beethoven, Fischer in Bach, Cortot in Chopin, or Segovia in just about anything written for guitar. As the century progressed, performance practice evolved to the point where the “text” — the music — became paramount. The ideal was to reveal the composer’s intent rather than to put one’s own stamp on the notes — the performer as conduit rather than co-composer.

Personal style played a major role in the early years of the cliometrics revolution. Hand a cliometrician an unpublished essay by Robert Fogel or Stanley Engerman, and I am quite sure she could identify the author after reading the first couple of paragraphs (if not the first couple of sentences). No one can possibly mistake a book by Doug North for a book by Peter Temin or an essay by Paul David for one by Lance Davis or Jeffrey Williamson. To some extent this is because personal style mattered at the time in economics generally — think Milton Friedman or Robert Solow. But mostly it mattered, I think, because these cliometricians were on a mission. Men and women on a mission put their personalities up front, because they are trying to shake up the status quo.

So it is with Stanley Lebergott. Indeed, of all the personalities who figured in the transformation of economic history from a sub-field of economics (I am tempted to write “intellectual backwater”) that eschewed advances in economic theory and econometrics to one that embraced them (I am tempted to write “for better and for worse”), Lebergott’s style was perhaps the most personal. In re-reading Lebergott’s most famous book — his Manpower in Economic Growth: The American Record since 1800 (1964) — one sees that style front and center on nearly every page, as well as the conflicting emotions as its author tried, not always successfully, to marry the anecdotal and archival snippets beloved by historians with the methods of economics. Manpower was (and is) substantively important for two reasons. First, prior to Manpower, the “economic history of labor” meant unions and labor legislation. By contrast, Lebergott made the labor market — the demand and supply of labor — his central focus and in doing so elevated markets and market forces to a central tendency in the writing of economic history. Second, Lebergott produced absolutely fundamental data — estimates of the labor force, industrial composition, unemployment, real wages, self-employment, and the like — that economic historians have relied on (or embellished) ever since.

These two accomplishments aside, I emphasize style not merely because, in Manpower‘s case, it is light years from the average article that I accept for publication in Explorations in Economic History; economic history, like all economics, is vastly more technical than it was in the early 1960s. I emphasize it because burrowing into the style of Manpower reveals an author transfixed by what he perceived to be the grandness of the American experiment, the transformation of a second-rate colony into the greatest economy the world had yet seen. The core of Manpower would always be its 33 appendix tables and 252 (!) pages of accompanying explanatory text, lovingly produced and so relentlessly documented as to drive any reader to distraction (or tears). The tables were his line in the sand, daring — indeed, taunting — the reader to do better. Lebergott knew that, in principle, one could do better, because he did not have ready access to all the relevant archival materials. I would conjecture, however, that he would have been surprised if anyone did, in fact, do better. Tom Weiss, himself one of the great compilers of American economic statistics, spent several years redoing Lebergott’s labor force estimates using census micro data rather than the published volumes that Lebergott relied on (Weiss 1986). In commenting on Weiss’s work, Lebergott (1986) characterized the differences between his original figures and the revisions as “very small beer” and then took Weiss to task for failing (in Lebergott’s view) to fully justify the revisions. “One awaits with interest,” he concluded, “further work by the National Bureau of Economic Research project of which this is a part.” When Georgia Villaflor and I (Margo and Villaflor 1987) produced a series of real wage estimates for the antebellum period drawing on archival sources that Lebergott did not use, I received a polite letter congratulating me but requesting more details and admonishing me to think harder about certain estimates that Lebergott felt did not mesh fully with his priors. There are thousands of numbers in those 33 appendix tables, and one’s sense is that each number received the undivided attention of its creator for many, many, many hours.

But numbers do not a narrative make. Chapter One, “The Matrix,” has little in common with the archetypal introduction that gives the reader a roadmap and a flavor of the findings. It begins instead with an 1802 quote from “The Reverend Stanley Griswold” about the frontier that lay before the good minister. “This good land, which stretches around us to such a vast extent … large like the munificence of heaven … [s]uch a noble present never before was given to any people.” (Reviewer’s note: any people? Which people?) The first sentence of the text proper describes an incongruous scene from Kentucky in 1832, “a petit bon homme” and his wife and their “little pile of trunks” sitting in a restaurant in the middle of (literally) nowhere. We then learn of a “great theme” of American history, that which motivated those who wrested the land from the “wilderness” — a belief in an open society, of which there were three elements. First, “hope” — an unabashed belief that things would always get better, and were better in America than in Europe. Second, “ignorance” — Americans were always willing to try something new, no matter how crazy. Third, America had a huge amount of space for people to spread out in. OK, the reader says, but where’s the economics? Around page 13 Lebergott emphasizes that the three elements made Americans an unusually restless people, willing to move all the time. Ordinarily, Lebergott opines, it is the smaller (geographically speaking) countries that have higher labor productivity, because people do not like to move. But Americans liked to move, he claims, and they did so on the slightest provocation. Excessive optimism, misinformation, and folly are core attributes of the American spirit and key factors in the American success story. In the end, the errors didn’t matter anyway (“small beer” indeed) because the land was so rich. More people moved to California in 1850 than could be rationally justified by the expected returns to gold mining, but as a result California entered the aggregate production function sooner than otherwise. Labor mobility per se was a Good Thing, and America had it in abundance.

Chapter Two asks where all the workers would come from. Lebergott notes that certain labor supplies were highly predictable — slaves, for example. But once the slave trade was abolished, the supply of slave labor grew at whatever the natural rate of increase happened to be. If the riches of America were to be tapped, free labor would have to be found — all the more difficult if the required number of workers to be assembled in any given spot was very large.

Another element of the Lebergott style is a dry wit, as evidenced in his exchange with Weiss. In a section on “[t]he Labor Force: Definition” we are told that “[t]he baby has contributed more to the gaiety of nations than have all the nightclub comics in history. We include the comic in the labor force … as we include [his] wages in the national income but set no value on the endearing talents provided by the baby.” In discussing the then-fashionable notion that the aggregate labor force participation rate (like other Great Ratios) was “invariant to economic conditions,” Lebergott notes that small changes can nevertheless have great import. “The United States Cavalry,” he observes, “was sent to the State of Utah because of the difference between 1.0 wives per husband and a slightly greater number.” The remainder of the chapter considers segments of the labor force whose labor was, indeed, “responsive to economic conditions” — European immigrants, internal migrants, (some) women and children — as well as the impact of social and political factors on labor supply; it demonstrates the extraordinary flexibility of the American labor force and its responsiveness to incentives. While this conclusion would not surprise anyone today, it was, I think, quite revolutionary at the time. It is as good an example as any I know of the power of historical thinking to debunk conventional wisdom derived from today’s numbers.

By now the reader is accustomed to Lebergott’s modus operandi — the opening paragraph that sometimes seems to be beside the point but really isn’t; quotations in the text from travelogues, diaries, plays, literature and what-not; obscure (to say the least) references in the footnotes; all interspersed with economic reasoning that has more than a tinge of what would be called today “behavioral” economics. In Chapter Three Lebergott talks about the “process” of labor mobility, which is really one extended inquiry into the relationship between mobility of various sorts and wage differentials. We get to see some univariate regression lines, superimposed on scatter plots of decade-by-decade changes in the labor force at, say, the state level against initial wage rates. Generally, labor flows were directed at states with higher initial wage rates, although Lebergott is quick to assert that “[m]igrants suboptimized” because the cross-state pattern was far less apparent at the level of regions. Next, Lebergott takes on the notion that economic development is an inexorable process of labor shifting out of agriculture. The American case, he claims, challenges this notion: American workers shifted out of agriculture when the economic incentives were right, that is, when the value of the marginal product of labor was higher outside of agriculture.

The remainder of Chapter 3 is divided into two brief sections, both of which contain some of the most interesting writing in the book. In “Social Mobility and the Division of Labor,” Lebergott examines the relationship between occupational specialization and growth. In the nineteenth century most workers possessed a myriad of skills, farmers especially. They were jacks of all trades, masters of none. Lebergott speculates that this was a good thing because the master of none was more inclined to try something new, rather than assume he was, well, the master and therefore knew everything. If some fraction of novel techniques were successful, this could (under strong assumptions) lead to a higher rate of technical progress. “Origins of the Factory System” considers the problem posed earlier in the book of assembling large numbers of workers at a given location. Rather than pay higher wages, manufacturers turned to an under-utilized source of labor, women and children. Some years later, the ideas presented in this section would come to full bloom in a celebrated article by Claudia Goldin and Kenneth Sokoloff (Goldin and Sokoloff 1982) on the role of female and child labor in early industrialization.

At 89 pages, Chapter Four, “Some Consequences,” is the longest chapter in the book. The first few pages, highly influential, are given to the formation of a national labor market, revealed by changes over time in the coefficient of variation of wages across locations. We are then given an extended tour of the history of American real wages, moving back and forth between the relevant tables in the appendix, quotations from contemporaries, and other anecdotal evidence. A section on the “Determinants of Real Wage Trends” comes next. The first determinant, productivity, is no surprise. The second, “Slavery,” isn’t really either, but here Lebergott’s contrarian instincts, I think, get the better of him. Lebergott would have the reader believe, first, that free and slave labor were close to perfect substitutes; and, second, that slave rental rates contained a premium above what the slave would have commanded in a free labor market. Consequently, when slavery ended, wages fell and there was downward pressure on real wage growth for a time. No question that wages fell in the South after the Civil War, but Lebergott’s analysis is incomplete at best. Slave labor was highly productive before the Civil War because of the gang system, and when the gang system ended, the demand for labor fell in the South. Because labor supplies were not perfectly elastic, wages fell too. “Immigration,” the third purported influence, had negative short-run effects on wages but positive long-run effects via productivity growth.

What follows next is a 25-page section that years later produced two high-profile controversies in macroeconomics. This is the (celebrated) section where Lebergott presents his long-term estimates of unemployment. In thinking today about his work, we would do well to remember that, at the time he prepared his estimates, the United States had only a relatively brief experience with the direct and regular measurement of unemployment, courtesy of the 1940 Census and the subsequent Current Population Survey (CPS). (By “direct” I mean answers to questions about a worker’s time allocation during a specific period of time — if you did not have a job during the survey week, were you looking for one?)

Like all the estimates in the book, Lebergott’s unemployment figures were the product of detailed, painstaking work that, inevitably, required strong assumptions. The fundamental problem was that, if one wanted annual estimates of unemployment, there was no way to obtain these directly from survey evidence prior to the CPS. For some benchmark dates one could produce tolerable direct estimates from the federal census, but the federal census was useless if one wanted to generate an estimate, say, for 1893 or, for that matter, 1933.

Lebergott’s solution was to rely on an identity. By definition, the labor force is the sum of employed and unemployed workers. One might not know the number of unemployed workers directly, but if one could extrapolate between benchmark dates both the size of the labor force and the level of employment, one could estimate unemployment by subtraction.
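In symbols, the identity and the resulting estimator are simply (the notation is mine, added for exposition):

```latex
L_t = E_t + U_t
\qquad\Longrightarrow\qquad
\widehat{U}_t = \widehat{L}_t - \widehat{E}_t ,
\qquad
\widehat{u}_t = \frac{\widehat{L}_t - \widehat{E}_t}{\widehat{L}_t}
```

where $\widehat{L}_t$ and $\widehat{E}_t$ are labor force and employment series interpolated between census benchmarks. Because the unemployed are a small residual of two large, independently estimated series, the assumptions used to interpolate employment matter enormously for the behavior of the result — which is exactly where Romer’s critique, discussed below, would later bite.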

The first high-profile controversy involved Lebergott’s estimates for the 1930s, which included persons on work relief in the count of unemployed workers. After 1933 there were many such workers, and so, by historical standards, unemployment looks rather high. This generated a lot of theoretical work for macroeconomists who thought they had to explain how unemployment rates could remain above 10 percent while real wages were rising (after 1933).

Michael Darby (1976) suggested that this effort was misplaced because Lebergott “should” have included the persons on work relief in the count of employed workers. Darby showed that doing so made the recovery after 1933 look much more normal. I’ve written a few papers on this issue, and my view is somewhere in between Darby and Lebergott (Margo 1991; Finegan and Margo 1994; see also Kesselman and Savin 1978). Ideally, in constructing labor force statistics we should be consistent over time, so if persons on work relief were “employed” in the 1930s we should consider adding, say, “workfare” recipients to the labor force (or, possibly, prisoners making license plates) today; this ideal, however, may not be achievable in practice. The real issue with New Deal work relief is not the resolution of a crusty debate between competing macroeconomic theories but whether the program affected individual behavior. Here I think the answer is a resounding yes — unemployed individuals in the 1930s did respond to incentives built into New Deal policies. Wives were far more likely to be “added workers” if their unemployed spouses had no work whatsoever than if the spouse held a work relief job, so much so that, in the aggregate, the added worker effect disappeared entirely in the late 1930s, because so many unemployed men were on work relief.

The second high-profile debate involved Christina Romer’s important work on the long-term properties of the American business cycle. Prior to her work it was (and in some quarters still is) a “stylized fact” that the business cycle today is less volatile than it was in the past. Lebergott’s original unemployment series, combined with standard post-war series, was often used to buttress claims that the macroeconomy became much more stable over time. Statistical measures of volatility estimated from the combined series clearly suggest this, whether volatility is measured by the average “distance” (in percentage points) between peaks and troughs or by standard deviations.

Romer (1986) argued that, to a large degree, this apparent decline in volatility was a figment of the way the original data were constructed. In particular, in constructing his annual series, Lebergott assumed (among other things) that deviations in employment followed deviations in output one-for-one. Romer invoked Okun’s law, arguing that the true relationship was more like 1:3 (see the stylized sketch below). Constructing post-war series by replicating (as closely as possible) Lebergott’s procedures produced a new series that was no less volatile than the pre-war series, thereby contradicting the stylized fact that the macroeconomy became more stable over time. This was, needless to say, a controversial conclusion, with many subsequently weighing in. Now that the dust has settled, my own view — a view I think many share, although I could be wrong — is that there is definitely something to Romer’s argument; at the very least, she demonstrated (as she claimed in her original article) that before one draws conclusions from historical time series, one should be very familiar with how the series are constructed. Chapter Four ends with another of Lebergott’s meditations on the alleged constancy of aggregate parameters — in this case, factor shares.
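A stylized illustration of the Okun’s law point (the numbers are mine, chosen for exposition, not Romer’s estimates):

```latex
\text{Lebergott's assumption:}\quad \%\Delta E_t \approx \%\Delta Y_t
\qquad\qquad
\text{Okun's law:}\quad \%\Delta E_t \approx \tfrac{1}{3}\,\%\Delta Y_t
```

Under the one-for-one assumption, a 6 percent fall in output shows up as roughly a 6 percentage point rise in unemployment; under the Okun’s law relationship, only about 2 points. A pre-war series built on the first assumption will therefore look about three times as volatile as a post-war series measured directly, even if the underlying economy behaved the same way in both periods.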

Chapter Five (“Some Inferences”) concludes the narrative portion of the book. It repeats the book’s earlier mantra that “Yankee ingenuity” and initiative, especially that embodied in immigrants, were central to American success as opposed, say, to “factor endowments.” It ruminates on how highly mobile labor influenced the choice of technique, in ways familiar to the first generation of cliometricians, especially those who found H.J. Habakkuk a source of (repeated) inspiration. It notes how “thickening markets” made finding continuous work easier over time, reducing the wage premium associated with unemployment risk. Today’s economic historians, infatuated with “institutions” versus “geography,” would probably disagree with the emphases in the chapter, but I think there is much to admire in Lebergott’s “inferences.”

Some economic historians make their mark as much through their graduate students as through their writings. Lebergott spent his academic career in a liberal arts college and did not, therefore, directly produce graduate students like a William Parker, Robert Fogel or (more recently) Joel Mokyr. In certain ways he was an outsider to economic history, an economist with a vast and deep appreciation for history in all of its flavors, who saw the past for what it can say about the present, not as an end in itself as a more “traditional” historian would. Compared with other classic works of cliometrics such as Fogel’s Railroads and American Economic Growth or North and Thomas’s The Rise of the Western World, Manpower‘s quirkiness can be frustrating, making it more suitable for dabbling than for a sustained read. By today’s standards the book falls short in its treatment of racial and ethnic differences (gender is more balanced), although this would hardly distinguish it from most other work in economics and economic history at the time. Yet Lebergott’s influence on economic history has been profound. There are few activities that economic historians can engage in of greater consequence than reconstructing the hard numbers, and in this line of work Lebergott had few peers. Manpower put the labor force — people — at the center of economic history, not the bloodless “agents” of economic models but real people. As if to underscore this, the style asserts, like a triple forte (fff) in music: a real person, not a (bloodless) “social scientist,” wrote this book, one in deep and abiding awe of the economic accomplishment of his forebears.

References:

Darby, Michael. 1976. “Three and a Half Million US Employees Have Been Mislaid: Or, An Explanation of Unemployment, 1934-1941,” Journal of Political Economy 84 (February): 1-16.

Finegan, T. Aldrich and Robert A. Margo. 1994. “Work Relief and the Labor Force Participation of Married Women in 1940,” Journal of Economic History 54 (March): 64-84.

Goldin, Claudia and Kenneth Sokoloff. 1982. “Women, Children, and Industrialization in the Early Republic: Evidence from the Manufacturing Censuses,” Journal of Economic History 42 (December): 741-774.

Kesselman, Jonathan R. and N. E. Savin. 1978. “Three and a Half Million Workers Were Never Lost,” Economic Inquiry 16 (April): 186-191.

Lebergott, Stanley. 1964. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill.

Lebergott, Stanley. 1986. “Comment,” in Stanley Engerman and Robert Gallman, eds., Long Term Factors in American Economic Growth, pp. 671-673. Chicago: University of Chicago Press.

Margo, Robert A. 1991. “The Microeconomics of Depression Unemployment,” Journal of Economic History 51 (June): 333-341.

Margo, Robert A. and Georgia Villaflor. 1987. “The Growth of Wages in Antebellum America: New Evidence,” Journal of Economic History 47 (December): 873-895.

Romer, Christina. 1986. “Spurious Volatility in Historical Unemployment Data,” Journal of Political Economy 94 (February): 1-37.

Weiss, Thomas. 1986. “Revised Estimates of the United States Workforce, 1800-1860,” in Stanley Engerman and Robert Gallman, eds., Long Term Factors in American Economic Growth, pp. 641-671. Chicago: University of Chicago Press.

Robert A. Margo is Professor of Economics and African-American Studies, Boston University, and Research Associate, National Bureau of Economic Research. He is also the editor of Explorations in Economic History.

Subject(s): Labor and Employment History
Geographic Area(s): North America
Time Period(s): 20th Century: WWII and post-WWII

Economic History Classics

Selections for 2006

During 2006 EH.NET published a series of “Classic Reviews.” Modeled along the lines of our earlier Project 2000 and Project 2001 series, the project asked reviewers to “reintroduce” each of the books to the profession, “explaining its significance at the time of publication and why it has endured as a classic.” Each review summarizes the book’s key findings, methods and arguments, places the book in its larger context, and discusses any weaknesses.

This year’s selections are (alphabetically by author):

Selection Committee

  • Gareth Austin, London School of Economics
  • Ann Carlos, University of Colorado
  • John Murray, University of Toledo
  • Lawrence Officer, University of Illinois at Chicago
  • Cormac Ó Gráda, University College Dublin
  • Peter Scott, University of Reading
  • Catherine Schenk, University of Glasgow
  • Pierre van der Eng, Australian National University
  • Jenny Wahl, Carleton College

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —–
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 —- —-
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 —- —-
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930. With the exception of the figures for 2000, participation rates are based on the gainful employment standard.

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was sixteen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 —- —-
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 —- —-
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Notes: North includes the New England, Middle Atlantic, and North Central regions. South includes the South Atlantic and South Central regions.

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
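Stated compactly (the notation is mine, not the article’s), the decision rule sketched above is:

```latex
\text{retire at age } t \iff
\underbrace{R_t \ \ge\ c_{\min}\, T_t}_{\text{resource constraint}}
\quad\text{and}\quad
\underbrace{b_t}_{\text{net benefit of retiring}} \ >\ \underbrace{w_t - k_t}_{\text{net benefit of working}}
```

where $R_t$ is pension wealth, savings, and expected family support; $c_{\min} T_t$ is minimum consumption over remaining expected life; $b_t$ is the value of leisure in retirement; $w_t$ is labor income; and $k_t$ is the cost of working (effort, commuting, the toll of illness or disability). Advancing age or disability raises $k_t$, while higher pension income raises both $R_t$ and $b_t$, so both channels push toward retirement, consistent with the empirical findings discussed below.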

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent for the elderly born in the nineteenth century than for men born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3,430 (in 1996 dollars), which is comparable to the levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living commensurate with eight times the income of Americans living a century ago. More income has made it possible to save for an extended retirement.

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. As a result of this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans that were funded under a ‘pay as you go’ system — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. Thanks to strong political allies, the struggling railroad pensions were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions and to provide those without access to a private pension with tax incentives, like those enjoyed by employer-provided pensions, to save for retirement. Since 1962, the self-employed have been able to establish ‘Keogh plans’ — tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to follow funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA provided those not covered by a private pension with the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from ‘defined benefit’ to ‘defined contribution’ plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and more choice over how the pension is invested.
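The contrast between the two plan types can be made concrete with a small sketch. This is a hypothetical illustration only; the benefit formula, contribution amounts, and rates of return are assumptions chosen for exposition, not the rules of any actual plan.

```python
# Hypothetical sketch contrasting the two plan types described above.
# All formulas and parameter values are illustrative assumptions.

def defined_benefit(final_salary: float, years_of_service: int,
                    multiplier: float = 0.015) -> float:
    """Annual benefit fixed by a formula; the employer bears investment risk."""
    return multiplier * years_of_service * final_salary

def defined_contribution(annual_contribution: float, years: int,
                         annual_return: float) -> float:
    """Final balance depends on realized returns; the employee bears the risk."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

# A 30-year career ending at a $50,000 salary:
print(defined_benefit(50_000, 30))            # 22500.0 per year, set by formula
print(defined_contribution(3_000, 30, 0.07))  # large nest egg if returns average 7%
print(defined_contribution(3_000, 30, 0.02))  # far smaller one if returns average 2%
```

The same stream of contributions produces very different retirement incomes depending on market performance, which is precisely the risk that the shift to defined contribution plans moves onto the employee.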

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.

References

Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, Jan 18, 2002. http://www.eh.net/encyclopedia/contents/maloney.african.american.php

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger L. and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997. http://www.ipums.umn.edu

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/economic-history-of-retirement-in-the-united-states/

The Economics of the Civil War

Roger L. Ransom, University of California, Riverside

The Civil War has been something of an enigma for scholars studying American history. During the first half of the twentieth century, historians viewed the war as a major turning point in American economic history. Charles Beard labeled it the “Second American Revolution,” claiming that “at bottom the so-called Civil War … was a social war, ending in the unquestioned establishment of a new power in the government, making vast changes … in the course of industrial development, and in the constitution inherited from the Fathers” (Beard and Beard 1927: 53). By the time of the Second World War, Louis Hacker could sum up Beard’s position by simply stating that the war’s “striking achievement was the triumph of industrial capitalism” (Hacker 1940: 373). The “Beard-Hacker Thesis” had become the most widely accepted interpretation of the economic impact of the Civil War. Harold Faulkner devoted two chapters to a discussion of the causes and consequences of the war in his 1943 textbook American Economic History (then in its fifth edition), claiming that “its effects upon our industrial, financial, and commercial history were profound” (1943: 340).

In the years after World War II, a new group of economic historians — many of them trained in economics departments — focused their energies on the explanation of economic growth and development in the United States. As they looked for the keys to American growth in the nineteenth century, these economic historians questioned whether the Civil War — with its enormous destruction and disruption of society — could have been a stimulus to industrialization. In his 1955 textbook on American economic history, Ross Robertson mirrored a new view of the Civil War and economic growth when he argued that “persistent, fundamental forces were at work to forge the economic system and not even the catastrophe of internecine strife could greatly affect the outcome” (1955: 249). “Except for those with a particular interest in the economics of war,” claimed Robertson, “the four year period of conflict [1861-65] has had little attraction for economic historians” (1955: 247). Over the next two decades, this became the dominant view of the Civil War’s role in the industrialization of the United States.

Historical research has a way of returning to the same problems over and over. The efforts to explain regional patterns of economic growth and the timing of the United States’ “take-off” into industrialization, together with extensive research into the “economics” of the slave system of the South and the impact of emancipation, brought economic historians back to questions dealing with the Civil War. By the 1990s a new generation of economic history textbooks once again examined the “economics” of the Civil War (Atack and Passell 1994; Hughes and Cain 1998; Walton and Rockoff 1998). This reconsideration of the Civil War by economic historians can be loosely grouped into four broad issues: the “economic” causes of the war; the “costs” of the war; the problem of financing the war; and a re-examination of the Beard-Hacker thesis that the war was a turning point in American economic history.

Economic Causes of the War

No one seriously doubts that the enormous economic stake the South had in its slave labor force was a major factor in the sectional disputes that erupted in the middle of the nineteenth century. Figure 1 plots the total value of all slaves in the United States from 1805 to 1860. In 1805 there were just over one million slaves worth about $300 million; fifty-five years later there were four million slaves worth close to $3 billion. In the 11 states that eventually formed the Confederacy, four out of ten people were slaves in 1860, and these people accounted for more than half the agricultural labor in those states. In the cotton regions the importance of slave labor was even greater. The value of capital invested in slaves roughly equaled the total value of all farmland and farm buildings in the South. Though the value of slaves fluctuated from year to year, there was no prolonged period during which the value of the slaves owned in the United States did not increase markedly. Looking at Figure 1, it is hardly surprising that Southern slaveowners in 1860 were optimistic about the economic future of their region. They were, after all, in the midst of an unparalleled rise in the value of their slave assets.

Research into the economic dynamics of the slave system demonstrated that the rise in the value of slaves was not based upon unfounded speculation. Slave labor was the foundation of a prosperous economic system in the South. To illustrate just how important slaves were to that prosperity, Gerald Gunderson (1974) estimated what fraction of the income of a white person living in the South of 1860 was derived from the earnings of slaves. Table 1 presents Gunderson’s estimates. In the seven states where most of the cotton was grown, almost one-half the population were slaves, and they accounted for 31 percent of white people’s income; for all 11 Confederate states, slaves represented 38 percent of the population and contributed 26 percent of whites’ income. Small wonder that Southerners — even those who did not own slaves — viewed any attempt by the federal government to limit the rights of slaveowners over their property as a potentially catastrophic threat to their entire economic system. By itself, the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war when faced with what they viewed as a serious threat to their “peculiar institution” after the electoral victories of the Republican Party and President Abraham Lincoln in the fall of 1860.

Table 1

The Fraction of Whites’ Incomes from Slavery

State | Percent of the Population That Were Slaves | Per Capita Earnings of Free Whites (in dollars) | Slave Earnings per Free White (in dollars) | Fraction of Earnings Due to Slavery (percent)
Alabama 45 120 50 41.7
South Carolina 57 159 57 35.8
Florida 44 143 48 33.6
Georgia 44 136 40 29.4
Mississippi 55 253 74 29.2
Louisiana 47 229 54 23.6
Texas 30 134 26 19.4
Seven Cotton States 46 163 50 30.6
North Carolina 33 108 21 19.4
Tennessee 25 93 17 18.3
Arkansas 26 121 21 17.4
Virginia 32 121 21 17.4
All 11 States 38 135 35 25.9
Source: Computed from data in Gerald Gunderson (1974: 922, Table 1)
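
The final column of Table 1 is just the ratio of the two earnings columns. As a purely illustrative check on how the table is constructed (the figures below are transcribed from Table 1 itself, not from Gunderson’s underlying data):

```python
# Recompute the "Fraction of Earnings Due to Slavery" column of Table 1.
# fraction (percent) = 100 * slave earnings per free white / per capita earnings of free whites
rows = {
    "Alabama": (120, 50),        # (per capita earnings of free whites, slave earnings per free white)
    "All 11 States": (135, 35),
}
for region, (white_earnings, slave_earnings) in rows.items():
    share = 100 * slave_earnings / white_earnings
    print(f"{region}: {share:.1f} percent")   # Alabama -> 41.7, All 11 States -> 25.9
```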

The Northern states also had a huge economic stake in slavery and the cotton trade. The first half of the nineteenth century witnessed an enormous increase in the production of short-staple cotton in the South, and most of that cotton was exported to Great Britain and Europe. Figure 2 charts the growth of cotton exports from 1815 to 1860. By the mid-1830s, cotton shipments accounted for more than half the value of all exports from the United States. Note that there is a marked similarity between the trends in the export of cotton and the rising value of the slave population depicted in Figure 1. There could be little doubt that the prosperity of the slave economy rested on its ability to produce cotton more efficiently than any other region of the world.

The income generated by this “export sector” was a major impetus for growth not only in the South, but in the rest of the economy as well. Douglass North, in his pioneering study of the antebellum U.S. economy, examined the flows of trade within the United States to demonstrate how all regions benefited from the South’s concentration on cotton production (North 1961). Northern merchants gained from Southern demands for shipping cotton to markets abroad, and from the demand by Southerners for Northern and imported consumption goods. The low price of raw cotton produced by slave labor in the American South enabled textile manufacturers — both in the United States and in Britain — to expand production and provide benefits to consumers through a declining cost of textile products. As manufacturing of all kinds expanded at home and abroad, the need for food in cities created markets for foodstuffs that could be produced in the areas north of the Ohio River. And the primary force at work was the economic stimulus from the export of Southern cotton. When James Hammond exclaimed in 1858 that “Cotton is King!” no one rose to dispute the point.

With so much to lose on both sides of the Mason-Dixon Line, economic logic suggests that a peaceful solution to the slave issue would have made far more sense than a bloody war. Yet no solution emerged. One “economic” solution to the slave problem would have been for those who objected to slavery to “buy out” the economic interest of Southern slaveholders. Under such a scheme, the federal government would purchase slaves. A major problem here was that the costs of such a scheme would have been enormous. Claudia Goldin estimates that the cost of having the government buy all the slaves in the United States in 1860 would have been about $2.7 billion (1973: 85, Table 1). Obviously, such a large sum could not be paid all at once. Yet even if the payments were spread over 25 years, the annual costs of such a scheme would have involved a tripling of federal government outlays (Ransom and Sutch 1990: 39-42)! The costs could have been reduced substantially if, instead of freeing all the slaves at once, children were left in bondage until the age of 18 or 21 (Goldin 1973: 85). Yet there would remain the problem of how even those reduced costs could be distributed among various groups in the population. The cost of any “compensated” emancipation scheme was so high that even those who wished to eliminate slavery were unwilling to pay for a “buyout” of those who owned slaves.
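
The arithmetic behind the “tripling” claim is straightforward. A minimal back-of-the-envelope sketch (assuming, purely for illustration, annual federal outlays of roughly $63 million around 1860, and ignoring interest on the deferred payments):

```python
# Hypothetical 25-year compensated emancipation, in 1860 dollars.
slave_value = 2.7e9        # Goldin's estimate of the total value of slaves in 1860
years = 25                 # payout period discussed in the text
annual_payment = slave_value / years               # $108 million per year
federal_outlays = 63e6     # assumption: approximate annual federal outlays c. 1860
multiple = (federal_outlays + annual_payment) / federal_outlays
print(f"annual payment: ${annual_payment / 1e6:.0f} million")
print(f"implied spending as a multiple of the prewar budget: {multiple:.1f}x")
# Roughly 2.7x, close to the tripling of outlays cited above; interest would raise it further.
```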

The high cost of emancipation was not the only way in which economic forces produced strong regional tensions in the United States before 1860. The regional economic specialization, previously noted as an important cause of the economic expansion of the antebellum period, also generated very strong regional divisions on economic issues. Recent research by economic, social and political historians has reopened some of the arguments first put forward by Beard and Hacker that economic changes in the Northern states were a major factor leading to the political collapse of the 1850s. Beard and Hacker focused on the narrow economic aspects of these changes, interpreting them as the efforts of an emerging class of industrial capitalists to gain control of economic policy. More recently, historians have taken a broader view of the situation, arguing that the sectional splits on these economic issues reflected sweeping economic and social changes in the Northern and Western states that were not experienced by people in the South. The term most historians have used to describe these changes is a “market revolution.”

[Map 1: Counties with urban populations in 1860. Source: United States Population Census, 1860.]

Perhaps the best single indicator of how pervasive the “market revolution” was in the Northern and Western states is the rise of urban places in areas where markets had become important. Map 1 plots the 292 counties that reported an “urban population” in 1860. (The 1860 Census Office defined an “urban place” as a town or city having a population of at least 2,500 people.) Table 2 presents some additional statistics on urbanization by region. In 1860, 6.1 million people — roughly one out of five persons in the United States — lived in an urban county. A glance at either the map or Table 2 reveals the enormous difference in urban development in the South compared to the Northern states. More than two-thirds of all urban counties were in the Northeast and West; those two regions accounted for nearly 80 percent of the urban population of the country. By contrast, less than 7 percent of people in the 11 Southern states of Table 2 lived in urban counties.

Table 2

Urban Population of the United States in 1860 (a)

Region | Counties with Urban Populations | Total Urban Population in the Region | Percent of Region’s Population Living in Urban Counties | Region’s Urban Population as Percent of U.S. Urban Population
Northeast (b) 103 3,787,337 35.75 61.66
West (c) 108 1,059,755 13.45 17.25
Border (d) 23 578,669 18.45 9.42
South (e) 51 621,757 6.83 10.12
Far West (f) 7 99,145 15.19 1.54
Total (g) 292 6,141,914 19.77 100.00
Notes:

(a) Urban population is people living in a city or town of at least 2,500.

(b) Includes: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

(c) Includes: Illinois, Indiana, Iowa, Kansas, Minnesota, Nebraska, Ohio, and Wisconsin.

(d) Includes: Delaware, Kentucky, Maryland, and Missouri.

(e) Includes: Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia.

(f) Includes: California, Colorado, the Dakotas, Nevada, New Mexico, Oregon, Utah, and Washington.

(g) Includes the District of Columbia.

Source: U.S. Census of Population, 1860.

The region along the north Atlantic Coast, with its extensive development of commerce and industry, had the largest concentration of urban population in the United States; roughly one-third of the population of the nine states defined as the Northeast in Table 2 lived in urban counties. In the South, the picture was very different. Cotton cultivation with slave labor did not require local financial services or nearby manufacturing activities that might generate urban growth. The 11 states of the Confederacy had only 51 urban counties, and they were widely scattered throughout the region. Western agriculture, with its emphasis on foodstuffs, encouraged urban activity near the source of production. These centers were not necessarily large; indeed, the West had roughly the same number of large and mid-sized cities as the South. However, there were far more small towns scattered throughout the settled regions of Ohio, Indiana, Illinois, Wisconsin and Michigan than in the Southern landscape.

Economic policy had played a prominent role in American politics since the birth of the republic in 1790. With the formation of the Whig Party in the 1830s, a number of key economic issues emerged at the national level. To illustrate the extent to which the rise of urban centers and increased market activity in the North led to a growing crisis in economic policy, historians have re-examined four specific areas of legislative action singled out by Beard and Hacker as evidence of a Congressional stalemate in 1860 (Egnal 2001; Ransom and Sutch 2001; Ransom 1989; Bensel 1990; McPherson 1988).

Land Policy

1. Land Policy. Settlement of western lands had always been a major bone of contention between slave and free-labor farms. The manner in which the federal government distributed land could have a major impact on the nature of farming in a region. Northerners wanted to encourage the settlement of farms which would depend primarily on family labor by offering cheap land in small parcels. Southerners feared that such a policy would make it more difficult to keep areas open for settlement by slaveholders who wanted to establish large plantations. This all came to a head with the homestead bill of 1860, which would have provided 160 acres of free land to anyone willing to settle and farm it. Northern and western congressmen strongly favored the bill in the House of Representatives, but the measure received only a single vote from slave states’ representatives. The bill passed, but President Buchanan vetoed it (Bensel 1990: 69-72).

Transportation Improvements

2. Transportation Improvements. Following the opening of the Erie Canal in 1825, there was growing support in the North and the Northwest for government support of improvements in transportation facilities — what were termed in those days “internal improvements.” The need for government-sponsored improvements was particularly urgent in the Great Lakes region (Egnal 2001: 45-50). The spread of the railroad in the 1840s gave added support to those advocating government subsidies to promote transportation. Southerners required far fewer internal improvements than people in the Northwest, and they tended to view federal subsidies for such projects as part of a “deal” between western and eastern interests that held no obvious gains for the South. The bill that best illustrates the regional disputes on transportation was the Pacific Railway Bill of 1860, which proposed a transcontinental railway link to the West Coast. The bill failed to pass the House, receiving no votes from congressmen representing districts of the South where there was a significant slave population (Bensel 1990: 70-71).

The Tariff

3. The Tariff. Southerners, with their emphasis on staple agriculture and their need to buy goods produced outside the South, strongly objected to the imposition of duties on imported goods. Manufacturers in the Northeast, on the other hand, supported a high tariff as protection against cheap British imports. People in the West were caught in the middle of this controversy. Like the agricultural South, they disliked the idea of a high “protective” tariff that raised the cost of imports. However, the tariff was also the main source of federal revenue at this time, and Westerners needed government funds for the transportation improvements they supported in Congress. As a result, the compromise reached by western and eastern interests in the tariff debates of 1857 was to support a “moderate” tariff, with duties set high enough to generate revenue and offer some protection to Northern manufacturers while not putting too much of a burden on Western and Eastern consumers. Southerners complained that even this level of protection was excessive and that it was one more example of the willingness of the West and the North to make economic bargains at the expense of the South (Ransom and Sutch 2001; Egnal 2001: 50-52).

Banking

4. Banking. The federal government’s role in the chartering and regulation of banks was a volatile political issue throughout the antebellum period. In 1832 President Andrew Jackson created a major furor when he vetoed a bill to recharter the Second Bank of the United States. Jackson’s veto ushered in a period termed “free banking” in the United States, in which the chartering and regulation of banks was left entirely in the hands of state governments. Banks were a relatively new economic institution at this point in time, and opinions were sharply divided over the degree to which the federal government should regulate them. In the Northeast, where over 60 percent of all banks were located, there was strong support by 1860 for the creation of a system of banks that would be chartered and regulated by the federal government. But in the South, which had little need for local banking services, there was little enthusiasm for such a proposal. Here again, the western states were caught in the middle. While they worried that a system of “national” banks would be controlled by the already dominant eastern banking establishment, western farmers found themselves in need of local banking services to finance their crops. By 1860 many were inclined to support the Republican proposal for a National Banking System; however, Southern opposition killed the National Bank Bill in 1860 (Ransom and Sutch 2001; Bensel 1990).

The growth of an urbanized market society in the North produced more than just a legislative program of political economy that Southerners strongly resisted. Several historians have taken a much broader view of the market revolution and industrialization in the North. They see the economic conflict of North and South, in the words of Richard Brown, as “the conflict of a modernizing society” (1976: 161). A leading historian of the Civil War, James McPherson, argues that Southerners were correct when they claimed that the revolutionary program sweeping through the North threatened their way of life (1983; 1988). James Huston (1999) carries the argument one step further by arguing that Southerners were correct in their fears that the triumph of this coalition would eventually lead to an assault by Northern politicians on slave property rights.

All this provided ample argument for those clamoring for the South to leave the Union in 1861. But why did the North fight a war rather than simply letting the unhappy Southerners go in peace? It seems unlikely that anyone will ever be able to show that the “gains” from the war outweighed the “costs” in economic terms. Still, war is always a gamble, and with neither the costs nor the benefits easily calculated before the fact, leaders are often tempted to take the risk. The evidence above certainly lends strong support to those arguing that it made sense for the South to fight if a belligerent North threatened the institution of slavery. An economic case for the North is more problematic. Most writers argue that the decision for war on Lincoln’s part was not based primarily on economic grounds. However, Gerald Gunderson points out that if, as many historians argue, Northern Republicans were intent on controlling the spread of slavery, then a war to keep the South in the Union might have made sense. Gunderson compares the “costs” of the war (which we discuss below) with the cost of “compensated” emancipation and notes that the two are roughly the same order of magnitude — 2.5 to 3.7 billion dollars (1974: 940-42). Thus, going to war made as much “economic sense” as buying out the slaveholders. Gunderson makes the further point, which has been echoed by other writers, that the only way the North could ensure that its program to contain slavery would be “enforced” was to keep the South in the Union. Allowing the South to leave the Union would mean that the North could no longer control the expansion of slavery anywhere in the Western Hemisphere (Ransom 1989; Ransom and Sutch 2001; Weingast 1998; Weingast 1995; Wolfson 1995). What is novel about these interpretations is the argument that it was the economic pressures of “modernization” in the North that made Northern policy towards secession in 1861 far more aggressive than the traditional story of a North forced into military action by the South’s attack on Fort Sumter allows.

That is not to say that either side wanted war — for economic or any other reason. Abraham Lincoln probably summarized the situation as well as anyone when he observed in his second inaugural address that: “Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.”

The “Costs” of the War

The Civil War has often been called the first “modern” war. In part this reflects the enormous effort expended by both sides to conduct the war. What was the cost of this conflict? The most comprehensive effort to answer this question is the work of Claudia Goldin and Frank Lewis (1975; 1978). The Goldin and Lewis estimates of the costs of the war are presented in Table 3. The costs are divided into two groups: the direct costs, which include the expenditures of the Union and Confederate governments plus the loss from destruction of property and the loss of human capital from the casualties; and what Goldin and Lewis term the indirect costs of the war, which include its economic implications after 1865. Goldin and Lewis estimate that the combined outlays of both governments — in 1860 dollars — totaled $3.3 billion. To this they add $1.8 billion to account for the discounted economic value of casualties in the war, and $1.5 billion to account for wartime destruction of property in the South. This gives a total of $6.6 billion in direct costs — with each region incurring roughly half the total.

Table 3

The Costs of the Civil War

(Millions of 1860 Dollars)

Category | South | North | Total

Direct Costs:
Government Expenditures 1,032 2,302 3,334
Physical Destruction 1,487 -- 1,487
Loss of Human Capital 767 1,064 1,831
Total Direct Costs of the War 3,286 3,366 6,652
Per Capita Direct Costs 376 148 212

Indirect Costs:
Total Decline in Consumption 6,190 1,149 7,339
Less: Effect of Emancipation 1,960 -- 1,960
Less: Effect of Cotton Prices 1,670 -- 1,670
Total Indirect Costs of the War 2,560 1,149 3,709
Per Capita Indirect Costs 293 51 118

Total Costs of the War 5,846 4,515 10,361
Per Capita Total Costs 670 199 330

Population in 1860 (Millions) 8.73 27.71 31.43

Source: Ransom (1998: 51, Table 3-1); Goldin and Lewis (1975; 1978)

While these figures are only a very rough estimate of the actual costs, they provide an educated guess as to the order of magnitude of the economic effort required to wage the war, and it seems likely that if there is a bias, it is to understate the total. (Thus, for example, the estimated “economic” losses from casualties ignore the emotional cost of 625,000 deaths, and the estimates of property destruction were quite conservative.) Even so, the direct cost of the war as calculated by Goldin and Lewis was 1.5 times the total gross national product of the United States for 1860 — an enormous sum in comparison with any military effort by the United States up to that point. What stands out in addition to the enormity of the bill is the disparity in the burden these costs represented to the people in the North and the South. On a per capita basis, the cost to the Northern population was about $150 — or roughly equal to one year’s income. The Southern burden was two and a half times that amount: $376 per man, woman and child.

Staggering though these numbers are, they represent only a fraction of the full costs of the war, which lingered long after the fighting had stopped. One way to measure the full “costs” and “benefits” of the war, Goldin and Lewis argue, is to estimate the value of the observed postwar stream of consumption in each region and compare that figure to the estimated hypothetical stream of consumption had there been no war (1975: 309-10). (All the figures for the costs in Table 3 have been adjusted to reflect their discounted value in 1860.) The Goldin and Lewis estimate for the discounted value of lost consumption for the South was $6.2 billion; for the North the estimate was $1.15 billion. Ingenious though this methodology is, it suffers from the serious drawback that consumption lost for any reason — not just the war — is included in the figure. Particularly for the South, not all the decline in output after 1860 could be directly attributed to the war; the growth in the demand for cotton that fueled the antebellum economy did not continue, and there was a dramatic change in the supply of labor due to emancipation. Consequently, Goldin and Lewis adjusted their estimate of lost consumption due to the war down to $2.56 billion for the South in order to exclude the effects of emancipation and the collapse of the cotton market. The magnitudes of these indirect effects are detailed in Table 3. After the adjustments, the estimated costs of the war totaled more than $10 billion. Allocating the costs to each region produces a per capita burden of $670 in the South and $199 in the North. What Table 3 does not show is the extent to which these expenses were spread out over a long period of time. In the North, consumption had regained its prewar level by 1873; in the South, however, consumption remained below its 1860 level to the end of the century. We shall return to this issue below.
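
In schematic form (a reconstruction of the logic rather than Goldin and Lewis’s own notation), the indirect cost is the discounted gap between the hypothetical and the observed consumption streams:

$$ \text{Indirect cost} = \sum_{t=1861}^{T} \frac{\hat{C}_t - C_t}{(1+r)^{t-1860}} $$

where $\hat{C}_t$ is consumption projected from antebellum trends, $C_t$ is observed consumption, and $r$ is the discount rate used to express the total in 1860 dollars. The adjustments described above amount to removing from the gap those components (the effects of emancipation and of the collapsing cotton market) that were not attributable to the war itself.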

Financing the War

No war in American history strained the economic resources of the nation as the Civil War did. Governments on both sides were forced to resort to borrowing on an unprecedented scale to meet the financial obligations of the war. With more developed markets and an industrial base that could ultimately produce the goods needed for the war, the Union was clearly in a better position to meet this challenge. The South, on the other hand, had always relied on either Northern or foreign capital markets for its financial needs, and it had virtually no manufacturing establishments to produce military supplies. From the outset, the Confederates relied heavily on funds borrowed outside the South to purchase supplies abroad.

Figure 3 shows the sources of revenue collected by the Union government during the war. In 1862 and 1863 the government covered less than 15 percent of its total expenditures through taxes. With the imposition of a higher tariff, excise taxes, and the introduction of the first income tax in American history, this situation improved somewhat, and by the war’s end 25 percent of the federal government revenues had been collected in taxes. But what of the other 75 percent? In 1862 Congress authorized the U.S. Treasury to issue currency notes that were not backed by gold. By the end of the war, the treasury had printed more than $250 million worth of these “Greenbacks” and, together with the issue of gold-backed notes, the printing of money accounted for 18 percent of all government revenues. This still left a huge shortfall in revenue that was not covered by either taxes or the printing of money. The remaining revenues were obtained by borrowing funds from the public. Between 1861 and 1865 the debt obligation of the Federal government increased from $65 million to $2.7 billion (including the increased issuance of notes by the Treasury). The financial markets of the North were strained by these demands, but they proved equal to the task. In all, Northerners bought almost $2 billion worth of treasury notes and absorbed $700 million of new currency. Consequently, the Northern economy was able to finance the war without a significant reduction in private consumption. While the increase in the national debt seemed enormous at the time, events were to prove that the economy was more than able to deal with it. Indeed, several economic historians have claimed that the creation and subsequent retirement of the Civil War debt ultimately proved to be a significant impetus to post-war growth (Williamson 1974; James 1984). Wartime finance also prompted a significant change in the banking system of the United States. In 1862 Congress finally passed legislation creating the National Banking System. Their motive was not only to institute the program of banking reform pressed for many years by the Whigs and the Republicans; the newly-chartered federal banks were also required to purchase large blocs of federal bonds to hold as security against the issuance of their national bank notes.
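
Putting the percentages quoted above together gives a rough decomposition of Union war finance. The minimal sketch below simply treats borrowing as the residual; the shares are approximations taken from the text, not from the data underlying Figure 3.

```python
# Rough composition of Union war finance, using the shares quoted in the text.
shares = {"taxes": 0.25, "money creation": 0.18}
shares["borrowing"] = 1.0 - sum(shares.values())   # residual: roughly 57 percent
for source, share in shares.items():
    print(f"{source}: {share:.0%}")
```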

The efforts of the Confederate government to pay for its war effort were far more chaotic than in the North, and reliable expenditure and revenue data are not available. Figure 4 presents the best revenue estimates we have for the Richmond government from 1861 through November 1864 (Burdekin and Langdana 1993). Several features of Confederate finance immediately stand out in comparison to the Union effort. First is the failure of the Richmond government to finance its war expenditures through taxation. Over the course of the war, tax revenues accounted for only 11 percent of all revenues. Another contrast was the much higher fraction of revenues accounted for by the issuance of currency: over a third of the Confederate government’s revenue came from the printing press. The remainder came in the form of bonds, many of which were sold abroad in either London or Amsterdam. The reliance on borrowed funds proved to be a growing problem for the Confederate treasury. By mid-1864 the cost of paying interest on outstanding government bonds absorbed more than half of all government expenditures. The difficulties of collecting taxes and floating new bond issues had become so severe that in the final year of the war the total revenues collected by the Confederate government actually declined.

The printing of money and borrowing on such a huge scale had a dramatic effect on the economic stability of the Confederacy. The best measure of this instability and eventual collapse can be seen in the behavior of prices. An index of consumer prices is plotted together with the stock of money from early 1861 to April 1865 in Figure 5. By the beginning of 1862 prices had already doubled; by the middle of 1863 they had increased by a factor of 13. Up to this point, the inflation could be largely attributed to the money placed in the hands of consumers by the huge deficits of the government. Prices and the stock of money had risen at roughly the same rate. This represented a classic case of what economists call demand-pull inflation: too much money chasing too few goods. However, from the middle of 1863 on, the behavior of prices no longer mirrored the money supply. Several economic historians have suggested that from this point on prices reflected people’s confidence in the future of the Confederacy as a viable state (Burdekin and Langdana 1993; Weidenmier 2000). Figure 5 identifies three major military “turning points” between 1863 and 1865. In late 1863 and early 1864, following the Confederate defeats at Gettysburg and Vicksburg, prices rose very sharply despite a marked decrease in the growth of the money supply. When the Union offensives in Georgia and Virginia stalled in the summer of 1864, prices stabilized for a few months, only to resume their upward spiral after the fall of Atlanta in September 1864. By that time, of course, the Confederate cause was clearly doomed. By the end of the war, inflation had reached a point where the value of the Confederate currency was virtually zero. People had taken to engaging in barter or using Union dollars (if they could be found) to conduct their transactions. The collapse of the Confederate monetary system was a reflection of the overall collapse of the economy’s efforts to sustain the war effort.
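
The “demand-pull” logic invoked here is the standard quantity-theory identity:

$$ M V = P Q $$

where $M$ is the money stock, $V$ its velocity of circulation, $P$ the price level, and $Q$ real output. With $Q$ shrinking and $M$ expanding, $P$ must rise even if $V$ is stable. One standard reading of the post-1863 divergence of prices from the money stock, consistent with the confidence argument above, is a rise in $V$: holders of increasingly distrusted Confederate notes tried to spend them as quickly as possible.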

The Union also experienced inflation as a result of deficit finance during the war; the consumer price index rose from 100 at the outset of the war to 175 by the end of 1865. While this was nowhere near the degree of economic disruption caused by the increase in prices experienced by the Confederacy, a 75 percent rise in prices did have an effect on how the burden of the war’s costs was distributed among various groups in each economy. Inflation is a tax, and it tends to fall on those who are least able to afford it. One group that tends to be vulnerable to a sudden rise in prices is wage earners. Table 4 presents data on prices and wages in the United States and the Confederacy. The wage series has been adjusted to reflect the decline in purchasing power due to inflation. Not surprisingly, wage earners in the South saw the real value of their wages practically disappear by the end of the war. In the North the situation was not as severe, but wages certainly did not keep pace with prices; the real value of wages fell by about 20 percent. It is not obvious why this happened. The need for manpower in the army and the demand for war production should have created a labor shortage that would drive wages higher. While the economic situation of laborers deteriorated during the war, one must remember that wage earners in 1860 were still a relatively small share of the total labor force. Agriculture, not industry, was the largest economic sector in the North, and farmers fared much better in terms of their income during the war than did wage earners in the manufacturing sector (Ransom 1998: 255-64; Atack and Passell 1994: 368-70).

Table 4

Indices of Prices and Real Wages During the Civil War

(1860=100)

Year | Union Prices | Union Real Wages | Confederate Prices | Confederate Real Wages
1860 100 100 100 100
1861 101 100 121 86
1862 113 93 388 35
1863 139 84 1,452 19
1864 176 77 3,992 11
1865 175 82
Source: Union: (Atack and Passell 1994: 367, Table 13.5)

Confederate: (Lerner 1954)
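
The real wage series in Table 4 simply deflates nominal wages by the price index (real = nominal / prices x 100, with 1860 = 100). As an illustration, the sketch below backs out the implied nominal wage index for the Union; the nominal figures are inferred from the table, not reported in it.

```python
# Real wage index = nominal wage index / price index * 100 (1860 = 100),
# so the implied nominal index is real * price / 100.
prices     = {1860: 100, 1861: 101, 1862: 113, 1863: 139, 1864: 176, 1865: 175}
real_wages = {1860: 100, 1861: 100, 1862: 93, 1863: 84, 1864: 77, 1865: 82}
for year in prices:
    nominal = real_wages[year] * prices[year] / 100
    print(year, f"implied nominal wage index: {nominal:.0f}")
# e.g. 1864: 77 * 176 / 100 ~ 136, so nominal wages rose about 36 percent
# while prices rose 76 percent, hence the roughly 20 percent fall in real wages.
```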

Overall, it is clear that the North did a far better job of mobilizing the economic resources needed to carry on the war. The greater sophistication and size of Northern markets meant that the Union government could call upon institutional arrangements that allowed for a more efficient system of redirecting resources into wartime production than was possible in the South. The Confederates depended far more upon outside resources and direct intervention in the production of goods and services for their war effort, and in the end the domestic economy could not bear up under the strain. It is worth noting in this regard that the Union blockade, which by 1863 had largely closed down not only the external trade of the South with Europe but also the coastal trade that had been an important element in the antebellum transportation system, may have played a more crucial part in bringing about the eventual collapse of the Southern war effort than is often recognized (Ransom 2002).

The Civil War as a Watershed in American Economic History

It is easy to see why contemporaries believed that the Civil War was a watershed event in American history. At a cost of billions of dollars and 625,000 men killed, slavery had been abolished and the Union preserved. Economic historians viewing the event fifty years later could note that the half-century following the Civil War had been a period of extraordinary growth and expansion of the American economy. But was the war really the “Second American Revolution” that Beard (1927) and Louis Hacker (1940) claimed? That was certainly the prevailing view as late as 1961, when Thomas Cochran published an article titled “Did the Civil War Retard Industrialization?” Cochran pointed out that, until the 1950s, there was no quantitative evidence to prove or disprove the Beard-Hacker thesis. Recent quantitative research, he argued, showed that the war had actually slowed the rate of industrial growth. Stanley Engerman expanded Cochran’s argument by attacking the Beard-Hacker claim that political changes — particularly the passage in 1862 of the Republican program of political economy that had been bottled up in Congress by Southern opposition — were instrumental in accelerating economic growth (Engerman 1966). The major thrust of these arguments was that neither the war nor the legislation was necessary for industrialization, which was already well underway by 1860. “Aside from commercial banking,” noted one commentator, “the Civil War appears not to have started or created any new patterns of economic institutional change” (Gilchrist and Lewis 1965: 174). Had there been no war, these critics argued, the trajectory of economic growth that emerged after 1870 would have materialized anyway.

Despite this criticism, the notion of a “second” American Revolution lives on. Clearly the Beards and Hacker were in error in their claim that industrial growth accelerated during the war. The Civil War, like most modern wars, involved a huge effort to mobilize resources to carry on the fight. This had the effect of making it appear that the economy was expanding due to the production of military goods. However, Beard and Hacker — and a good many other historians — mistook this increased wartime activity for a net increase in output when in fact resources were simply shifted away from consumer products towards wartime production (Ransom 1989: Chapter 7). But what of the larger question of political change resulting from the war? Critics of Beard and Hacker claimed that the Republican program would eventually have been enacted even if there had been no war; hence the war was not a crucial turning point in economic development. The problem with this line of argument is that it completely misses the point of the Beard-Hacker argument. They would readily agree that in the absence of a war the Republican program of political economy would have triumphed — and that is why there was a war! Historians who argue that economic forces were an underlying cause of sectional conflicts go on to point out that war was probably the only way to settle those conflicts. In this view, the war was a watershed event in the economic development of the United States because the Union military victory ensured that the “market revolution” would not be stymied by the South’s attempt to break up the Union (Ransom 1999).

Whatever the effects of the war on industrial growth, economic historians agree that the war had a profound effect on the South. The destruction of slavery meant that the entire Southern economy had to be rebuilt. This turned out to be a monumental task, far larger than anyone at the time imagined. As noted above in the discussion of the indirect costs of the war, Southerners bore a disproportionate share of those costs, and the burden persisted long after the war had ended. The failure of the postbellum Southern economy to recover has spawned a huge literature that goes well beyond the effects of the war.

Economic historians who have examined the immediate effects of the war have reached a few important conclusions. First, the idea that the South was physically destroyed by the fighting has been largely discarded. Most writers have accepted the argument of Ransom and Sutch (2001) that the major “damage” to the South from the war was the depreciation and neglect of property on farms as a significant portion of the male workforce went off to war for several years. Second was the impact of emancipation. Slaveholders lost their enormous investment in slaves as a result of emancipation. Planters were consequently strapped for capital in the years immediately after the war, and this affected their options with regard to labor contracts with the freedmen and in their dealings with capital markets to obtain credit for the planting season. The freedmen and their families responded to emancipation by withdrawing up to a third of their labor from the market. While this was a perfectly reasonable response, it had the effect of creating an apparent labor “shortage,” and it convinced white landlords that a free labor system could never work with the ex-slaves, further complicating an already unsettled labor market. In the longer run, as Gavin Wright (1986) put it, emancipation transformed the white landowners from “laborlords” to “landlords.” This was not a simple transition. While they were able, for the most part, to cling to their landholdings, the ex-slaveholders were ultimately forced to break up the great plantations that had been the cornerstone of the antebellum Southern economy and rent small parcels of land to the freedmen using a new form of rental contract: sharecropping. From a situation where tenancy was extremely rare, the South suddenly became an agricultural economy characterized by tenant farms.

The result was an economy that remained heavily committed not only to agriculture, but to the staple crop of cotton. Crop output in the South fell dramatically at the end of the war and had not yet recovered its antebellum level by 1879. The loss of income was particularly hard on white Southerners; per capita income of whites in 1857 had been $125; in 1879 it was just over $80 (Ransom and Sutch 1979). Table 5 compares the growth of GNP in the United States with the gross crop output of the Southern states from 1874 to 1904. Over the last quarter of the nineteenth century, gross crop output in the South rose by about one percent per year at a time when the GNP of the United States (including the South) was rising at twice that rate. By the end of the century, Southern per capita income had fallen to roughly two-thirds of the national level, and the South was locked in a cycle of poverty that lasted well into the twentieth century. How much of this failure was due solely to the war remains open to debate. What is clear is that neither the dreams of those who fought for an independent South in 1861 nor the hopes of those who believed a “New South” might emerge from the destruction of war after 1865 were realized.

Table 5

Annual Rates of Growth of Gross National Product of the U.S. and Gross Southern Crop Output, 1874 to 1904

(Annual Percentage Rates of Growth)

Interval | Gross National Product of the U.S. | Gross Southern Crop Output
1874 to 1884 2.79 1.57
1879 to 1889 1.91 1.14
1884 to 1894 0.96 1.51
1889 to 1899 1.15 0.97
1894 to 1904 2.30 0.21
1874 to 1904 2.01 1.10
Source: Ransom and Sutch (1979: 140, Table 7.3).
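
Growth rates of the kind reported in Table 5 are conventionally computed as compound annual rates between endpoint years. The source does not spell out its exact procedure, so the following should be read as the generic formula rather than a description of it:

$$ g = \left( \frac{Y_{t+n}}{Y_t} \right)^{1/n} - 1 $$

where $Y_t$ is output in the initial year and $n$ is the length of the interval in years (here ten, or thirty for the full 1874 to 1904 span).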

References

Atack, Jeremy, and Peter Passell. A New Economic View of American History from Colonial Times to 1940. Second edition. New York: W.W. Norton, 1994.

Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. New York: Macmillan, 1927.

Bensel, Richard F. Yankee Leviathan: The Origins of Central State Authority in America, 1859-1877. New York: Cambridge University Press, 1990.

Brown, Richard D. Modernization: The Transformation of American Life, 1600-1865. New York: Hill and Wang, 1976.

Burdekin, Richard C.K., and Farrokh K. Langdana. “War Finance in the Southern Confederacy.” Explorations in Economic History 30 (1993): 352-377.

Cochran, Thomas C. “Did the Civil War Retard Industrialization?” Mississippi Valley Historical Review 48 (September 1961): 197-210.

Egnal, Marc. “The Beards Were Right: Parties in the North, 1840-1860.” Civil War History 47 (2001): 30-56.

Engerman, Stanley L. “The Economic Impact of the Civil War.” Explorations in Entrepreneurial History, second series 3 (1966): 176-199.

Faulkner, Harold Underwood. American Economic History. Fifth edition. New York: Harper & Brothers, 1943.

Gilchrist, David T., and W. David Lewis, editors. Economic Change in the Civil War Era. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1965.

Goldin, Claudia Dale. “The Economics of Emancipation.” Journal of Economic History 33 (1973): 66-85.

Goldin, Claudia, and Frank Lewis. “The Economic Costs of the American Civil War: Estimates and Implications.” Journal of Economic History 35 (1975): 299-326.

Goldin, Claudia, and Frank Lewis. “The Post-Bellum Recovery of the South and the Cost of the Civil War: Comment.” Journal of Economic History 38 (1978): 487-492.

Gunderson, Gerald. “The Origin of the American Civil War.” Journal of Economic History 34 (1974): 915-950.

Hacker, Louis. The Triumph of American Capitalism: The Development of Forces in American History to the End of the Nineteenth Century. New York: Columbia University Press, 1940.

Hughes, J.R.T., and Louis P. Cain. American Economic History. Fifth edition. New York: Addison Wesley, 1998.

Huston, James L. “Property Rights in Slavery and the Coming of the Civil War.” Journal of Southern History 65 (1999): 249-286.

James, John. “Public Debt Management and Nineteenth-Century American Economic Growth.” Explorations in Economic History 21 (1984): 192-217.

Lerner, Eugene. “Money, Prices and Wages in the Confederacy, 1861-65.” Ph.D. dissertation, University of Chicago, Chicago, 1954.

McPherson, James M. “Antebellum Southern Exceptionalism: A New Look at an Old Question.” Civil War History 29 (1983): 230-244.

McPherson, James M. Battle Cry of Freedom: The Civil War Era. New York: Oxford University Press, 1988.

North, Douglass C. The Economic Growth of the United States, 1790-1860. Englewood Cliffs: Prentice Hall, 1961.

Ransom, Roger L. Conflict and Compromise: The Political Economy of Slavery, Emancipation, and the American Civil War. New York: Cambridge University Press, 1989.

Ransom, Roger L. “The Economic Consequences of the American Civil War.” In The Political Economy of War and Peace, edited by M. Wolfson. Norwell, MA: Kluwer Academic Publishers, 1998.

Ransom, Roger L. “Fact and Counterfact: The ‘Second American Revolution’ Revisited.” Civil War History 45 (1999): 28-60.

Ransom, Roger L. “The Historical Statistics of the Confederacy.” In The Historical Statistics of the United States, Millennial Edition, edited by Susan Carter and Richard Sutch. New York: Cambridge University Press, 2002.

Ransom, Roger L., and Richard Sutch. “Growth and Welfare in the American South in the Nineteenth Century.” Explorations in Economic History 16 (1979): 207-235.

Ransom, Roger L., and Richard Sutch. “Who Pays for Slavery?” In The Wealth of Races: The Present Value of Benefits from Past Injustices, edited by Richard F. America, 31-54. Westport, CT: Greenwood Press, 1990.

Ransom, Roger L., and Richard Sutch. “Conflicting Visions: The American Civil War as a Revolutionary Conflict.” Research in Economic History 20 (2001).

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. Second edition. New York: Cambridge University Press, 2001.

Robertson, Ross M. History of the American Economy. Second edition. New York: Harcourt Brace and World, 1955.

United States, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Two volumes. Washington: U.S. Government Printing Office, 1975.

Walton, Gary M., and Hugh Rockoff. History of the American Economy. Eighth edition. New York: Dryden, 1998.

Weidenmier, Marc. “The Market for Confederate Bonds.” Explorations in Economic History 37 (2000): 76-97.

Weingast, Barry. “The Economic Role of Political Institutions: Market Preserving Federalism and Economic Development.” Journal of Law, Economics and Organization 11 (1995): 1-31.

Weingast, Barry R. “Political Stability and Civil War: Institutions, Commitment, and American Democracy.” In Analytic Narratives, edited by Robert Bates et al. Princeton: Princeton University Press, 1998.

Williamson, Jeffrey. “Watersheds and Turning Points: Conjectures on the Long-Term Impact of Civil War Financing.” Journal of Economic History 34 (1974): 636-661.

Wolfson, Murray. “A House Divided against Itself Cannot Stand.” Conflict Management and Peace Science 14 (1995): 115-141.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Ransom, Roger. “Economics of the Civil War”. EH.Net Encyclopedia, edited by Robert Whaples. August 24, 2001. URL http://eh.net/encyclopedia/the-economics-of-the-civil-war/

Castles, Battles and Bombs: How Economics Explains Military History

Author(s):Brauer, Jurgen
Tuyll, Hubert van
Reviewer(s):Howlett, Peter

Published by EH.NET (February 2009)

Jurgen Brauer and Hubert van Tuyll, Castles, Battles and Bombs: How Economics Explains Military History. Chicago: University of Chicago Press, 2008. xix + 385 pp. $29 (cloth), ISBN: 0-226-07163-4.

Reviewed for EH.NET by Peter Howlett, Department of Economic History, London School of Economics.

This book is a collaboration between a professor of economics and a professor of history, both based at Augusta State University, whose aim is to show how economic theory can enrich our understanding of military history. At first glance, therefore, this should be a book that appeals to economic historians: applying or utilizing economic principles, theories and ideas to history is our bread and butter. However, its target audience is “the general-interest reader” (p. xvii), it is a trade book, and despite the authors’ claim that “each of the substantive chapters nonetheless makes a genuine contribution to the development of scholarly knowledge” (pp. xvii-xviii) it is not a research monograph. It is therefore more likely to appeal to students than to professors.

The strategy of the book is to take six case studies spanning a thousand years of history and apply economic principles to them to improve our depth of understanding. The historical cases are: the medieval castle, mercenaries during the Renaissance, the decision to offer battle in the seventeenth and eighteenth centuries, the role of information in the American Civil War, the strategic bombing of Germany in the Second World War, and France’s acquisition of a nuclear arsenal during the Cold War. The cases are studied through, or used to illustrate, six “economic principles”: opportunity cost, expected marginal costs and benefits, substitution, diminishing marginal returns, asymmetric information and hidden characteristics, and hidden actions and incentive alignments. The economic principles, along with a broad introduction to economics itself, are introduced in chapter one, which is then followed by the case studies. While each case does touch on several of the economic principles, all are focused on one aspect: for example, the medieval castle is used to illustrate the opportunity cost of warfare, while the bombing of Germany focuses on diminishing marginal returns. Each of the case study chapters also comes with an appendix that provides a matrix whose rows are the six economic principles chosen by the authors and whose columns are five elements of the military sphere (manpower, logistics, technology, planning and operations); the effectiveness of this as a summary or learning tool is debatable, and often the individual elements of the matrix seem forced. The final chapter, which was written in response to “prepublication readers” (p. xix), offers some thoughts on war in the twenty-first century, including the economics of terrorism and the economics of private military companies. It offers some interesting insights for the general reader, illustrating how economists are thinking about such issues, but sits outside the analytical framework employed in the rest of the book.

At least some aspects of the book could have been improved. For example, the cases could have had a more global spread: five of the cases are European and one is American. Perhaps more acknowledgment could have been given to military historians and military strategy. For example, it is disappointing that neither The Art of War (written by Sun Tzu several hundred years before the birth of Christ) nor On War (by Carl von Clausewitz, published in 1832) appears in the book’s list of references, although the authors do acknowledge Sun Tzu in discussing the significance of information (p. 159), and Clausewitz gets a nod in the discussion of Napoleon’s strategy (p. 154). These two classic texts on military strategy, while obviously not written using a modern economist’s vocabulary, do utilize economic concepts: for example, the second chapter of The Art of War explains how to understand the economic nature of competition and conflict and how success requires limiting their cost. This raises a broader issue of how successful the book will be in opening a dialogue with military historians. In a similar, but more self-interested, vein, the book does not fully appreciate the existing large economic history literature that relates to the topic of military strategy (in the broadly defined manner used in the book), not least in terms of technology, state policy, or institutions. To cite just one example: Daniel Benjamin and Christopher Thornberg (“Organization and Incentives in the Age of Sail,” Explorations in Economic History, 2007, pp. 317-341) recently argued that the pay incentive system employed by the British navy from the late seventeenth century to the early nineteenth century was one factor underlying its military success.

Overall, while the book’s contribution to knowledge is limited, it does serve its target audience of “the general-interest reader” well: it is written in an engaging manner, provides much interesting information about its disparate cases, and does illustrate that economics (as this readership knows) has much to offer history. As such it may well find its way onto many student reading lists.

Peter Howlett (w.p.howlett@lse.ac.uk) is a Senior Lecturer in the Economic History Department of the LSE. His most recent publication was “Trade, Convergence and Globalisation: The Dynamics of the International Income Distribution, 1950-1998” (with P. Epstein and M-S. Schulze), Explorations in Economic History, vol. 44, no. 1 (January 2007), pp. 100-13.

Subject(s):Military and War
Geographic Area(s):North America
Time Period(s):General or Comparative

Northern Naval Superiority and the Economics of the American Civil War

Author(s):Surdam, David G.
Reviewer(s):Ransom, Roger L.

Published by EH.NET (November 2001)

David G. Surdam, Northern Naval Superiority and the Economics of the American Civil War. Columbia: University of South Carolina Press, 2001. xiv + 286 pp. $34.95 (cloth), ISBN: 1-57003-407-9.

Reviewed for EH.NET by Roger L. Ransom, Department of History, University of California, Riverside.

Historians have argued for years whether the effort of the Union Navy to cut off trade to the Confederacy during the Civil War was an effective weapon that hastened the end of the war. At present it appears that there is still some question whether the glass is half full — or is it half empty? Those who see the glass as half full point to the enormous fall in commerce during the war along with the evidence of shortages in the Confederacy; those who see it as half empty argue that it was not until mid-1863 that the blockade was fully in place, and even then enough blockade runners got through to provide the South with the essential military supplies to carry on the war. In this well-researched study of Union naval activities during the war, David Surdam seeks to provide an answer to this enduring puzzle by examining the problem from an economist’s perspective. Surdam begins by pointing out that counting the number of ships and tons of supplies that did or did not “get through” the blockade is an inadequate way to measure the success or failure of the closing of Confederate ports. “The focus on imports,” he notes, “has been almost myopic and misses what may have been two of the blockade’s most important achievements: disrupting intraregional trade and denying the Confederacy badly needed revenue from exporting raw cotton and other staple products” (p. 6).

Part I of the book presents an overview of the Southern economy on the eve of the Civil War. Surdam provides ample quantitative evidence to demonstrate that the South was more than capable of producing enough food to support itself during a war — a result that will hardly surprise those familiar with earlier debates on antebellum Southern self-sufficiency. The problem, Surdam points out, is that the food production was either scattered over large areas, or — in the case of meat — concentrated in areas that were remote from the scene of fighting. In normal times, a combination of coastal and river traffic could move supplies to central markets and on to their final destination. A point that is often ignored by those who stress the importance of smuggling military goods into the South during the war is that the blockade not only made it difficult to carry on foreign commerce from Southern ports; it also shut down the coastal trade that had been so active in antebellum times. Without that water transport, the Southern rail system had to pick up the slack. It was not equal to the task. The result, argues Surdam, was a serious problem of moving goods within the CSA.

Part II of the book examines the effect that this disruption of trade had on the Confederate economy and military effort. An important element of Surdam’s argument is that he includes the Union control of New Orleans and the Mississippi Valley as part of the “blockade.” Chapters 3 and 4 examine the extent to which Union forces were able to seriously restrict the flow of meat from the trans-Mississippi area, thereby creating severe shortages of meat in the Eastern Confederacy. There is no question that access to the coastal trade and inland waterways could have alleviated this problem. Chapter 5 presents an excellent analysis of the shortcomings of the Southern rail system in meeting this challenge. This is the best chapter of the book. Surdam clearly shows how the Union Navy’s blockade and control of the Mississippi Valley strained an already overtaxed rail system in the South to the breaking point. The impact of the blockade was two-fold. On the one hand, the closing of water transport meant that there was increased traffic on a rail system that was not designed for heavy shipments of through freight. Nor was the Southern rail system built to handle shipments along an east-west axis rather than a north-south axis. Increased traffic meant that rolling stock and equipment wore out faster. The second impact of the blockade on the transportation system was to eliminate the possibility of importing materials that would allow the railroads to be maintained — or in some cases to have new railroads built. The result was that rail service deteriorated steadily throughout the war and problems of supply increased everywhere.

Chapters 6 and 7 examine in greater detail the effects of this collapse of the transportation system on military and civilian efforts to support the war, together with a brief examination of the supply situation in Virginia in Chapter 9. There are no surprises here; my main complaint is that Surdam devotes too little space (a total of twenty pages for all three chapters) to problems that are central to the debates on the effects of the blockade. The major thrust of his argument is that the blockade raised transportation costs to the point where it cost $2 to get $1 of imports. In an economy that depended on imported goods for some of its basic needs, this was a serious problem. Yet Surdam says very little about the role of the blockade in adding to inflationary pressures, and, apart from some anecdotes drawn from reports of riots and personal hardships, he does not explore the effects on civilian morale of the reduced supply of imported goods whose availability Southerners had taken for granted before the war. As Avner Offer has pointed out with regard to the effects of the Allied blockade of Germany in World War I, the rapid elimination of goods that form an important part of people’s consumption bundle can force a drastic change in consumption patterns that eventually undermines morale at home (The First World War: An Agrarian Interpretation, 1989).

Surdam presents a convincing case that there were serious problems of supply within the CSA during the war. My sense is that looking further into these problems would reveal a total impact of the transportation collapse on civilian life even greater than the one Surdam portrays. Yet there remains the question: how great was the role of the blockade in creating this problem? Granted, the loss of coastal shipping hindered the movement of supplies. But Surdam admits that even had there been no blockade, the CSA would have encountered extreme difficulties in meeting demands for shipping goods to all parts of the country. And the armies posed a special problem for even a well-developed transportation network. Each of the two major armies of the CSA was a “mobile population center” of 40,000 to 90,000 people (p. 98). Even without a blockade, feeding the armies would have been a challenge for the Southern transportation network — a situation that on two occasions caused Lee to take his army into the North to forage, with disastrous consequences. What emerges from this part of the book is the crucial role of the Union Navy in greatly compounding the problems of Southern transportation by capturing New Orleans early in the war. Because Texas and Arkansas were the major meat producers in the CSA, the interdiction of trade across the Mississippi created a serious shortage of meat for Southern armies throughout the war.

Part III of the book moves on to the question of “King Cotton” and the efforts of the CSA to maximize the earning power of its staple crop. “Cotton revenues,” claims Surdam, “remained the Confederacy’s best economic asset, but realization of that asset depended upon the South’s ability to properly play its cotton card in the face of Northern naval superiority” (p. 132). Surdam begins with a discussion in Chapter 9 of how the South could have manipulated the supply of cotton to its advantage by placing an export tax on the staple and perhaps encouraging producers to cut back on cotton production. Whatever their theoretical appeal as economic policies, neither of these proposals made a great deal of political sense. Surdam himself admits that efforts to impose an export tax met with little enthusiasm in the Confederate Congress, and efforts to control exports offered little basis for confidence that the CSA would play its “cotton card” right. The one effort to manipulate cotton supply — the embargo of 1861 — was a miscalculation that threatened to deprive the CSA of its chance to get cotton to European markets even before the blockade became effective. Surdam correctly notes that the effect of the embargo was not as great as often believed, inasmuch as the 1860 crop had already been shipped and the 1861 crop had not yet been harvested (pp. 161-62). Nonetheless, a perusal of Confederate policy towards “manipulating” cotton hardly leads one to conclude that this was a policy area with great possibilities for success, and by the middle of 1862 the question was rendered moot by the increasing effectiveness of the blockade.

In Chapter 10 Surdam engages in a prolonged discussion of the future of the world cotton market in the period after 1860. While the econometric analysis presented in this chapter may interest those who have followed the spirited debate over the stagnation of cotton prices after the Civil War, it seems to me largely irrelevant to an assessment of the effects of the Northern blockade in 1862-65. As Surdam points out, the effect of the blockade was to drive up the price of cotton in Britain. But since the blockade also denied cotton shipments to Europe, the Confederates were unable to reap the potential revenue such exports might have brought in the form of both foreign exchange and export taxes. Surdam contends that had those cotton exports been shipped, the South could have paid for imports needed in the war effort and, with clever management of Confederate finances, obtained revenues from export taxes to finance the war. He regards this as a “catastrophic loss” of revenues that crippled the Southern military effort.

This is an interesting argument, but I do not find it convincing. There is no question that the South would have been better off had it been free to ship cotton to Europe. But I would argue that the inability to do so was not as “catastrophic” as Surdam suggests. The South was still able to play a “cotton card” in Europe without actually shipping the cotton: the Confederate government raised money abroad by issuing bonds based on the expectation that the existing supply of cotton could be shipped to Europe as soon as the war ended. The success of the cotton-backed “Erlanger bonds,” which actually held their value better than bonds backed by gold, underscores this point. In short, Confederates could and did mortgage the cotton crops to pay for the war — but they still needed to win the war! It was the role played by the blockade in disrupting the Southern economy, outlined in Part II of this book, rather than the inability of the South to ship cotton to Europe, that ultimately worked to defeat the Confederate war effort. One issue Surdam does not raise in connection with the “cotton card” is the possibility that Southern producers continued to grow cotton long after the blockade had made such production almost worthless. The manpower tied up in useless cotton production was probably a more severe loss to the Confederate war effort than the revenue forgone from a cotton export tax that was never collected.

One of the ironies of the Union naval blockade was that the North cut itself off from its own supply of raw cotton. In Chapter 13 Surdam presents a brief analysis of Northern efforts to obtain cotton through the blockade. This was a fairly easy task once Union forces occupied New Orleans and secured Memphis on the Mississippi. Both sides gained from this trade, and each had difficulty weighing the net gains. As Surdam points out, Lincoln recognized this, and when asked what his policy was with regard to trading cotton for supplies with the rebels, he replied, “My policy is to have no policy!” (p. 202). After noting the advantages to both sides, Surdam concludes that “it would be unfair to expect that either government would have maximized the potential gains” from the exchange of cotton.

In the final chapter of the book, Surdam notes that the U.S. government commissioned 700 vessels and spent $567 million — or about 8 percent of the total expenditures of the war — on the navy (p. 206). He then raises the obvious question an economist should ask: was such an elaborate blockade necessary to cripple the South? Clearly, the South was harmed by the blockade; the real question is whether a lesser effort might have produced almost as powerful a result. We noted above that Surdam’s argument on the centrality of the Mississippi Valley suggests that the capture of New Orleans by itself was a crippling blow that could not be offset by a different pattern of shipments within the South, and that a blockade of several major Gulf ports would have been sufficient to virtually shut down the cotton trade. But there remains the blockade’s success in shutting down the coastal trade. Just how effective this was can be seen in the numbers of ships that ran the blockade. Over the course of the war, blockade runners made 5,400 successful runs; to put this number in perspective, 3,500 of those came in the first year of the war, and before the war New Orleans alone had averaged more than 1,900 vessels entering annually (p. 5). My assessment of Surdam’s analysis of the strain this put on the transportation system is that, even allowing for some ambiguity as to how much of the disruption was due to the blockade, he is correct in his conclusion that “for the resources expended, the blockade appears to have been a worthwhile investment” (p. 209).

For those interested in studying the blockade or the economy of the South during the war, this book is itself a worthwhile investment. In addition to its compelling argument that the transportation infrastructure of the South was crippled by the strain of the Northern naval blockade, the book includes a wealth of well-organized data and several excellent maps of the Confederate transportation system. And, for those who wish to explore some of the issues in greater depth, the author has added appendices dealing with the estimates of meat supply and the potential revenue from raw cotton in the CSA, and three appendices on the demand for cotton in the U.S. and Britain.

Roger Ransom is the author of “The Economics of the Civil War” in EH.Net’s Encyclopedia of Economic and Business History.

http://eh.net/encyclopedia/ransom.civil.war.us.php

Subject(s):Transport and Distribution, Energy, and Other Services
Geographic Area(s):North America
Time Period(s):19th Century

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third-party intermediaries such as employment agents) and informal mechanisms of communication, such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad-ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets have remained relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. Viewed from a distance and over the long run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of arrivals came as indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). The majority instead signed contracts, or “indentures,” with British merchants—their labor being their only viable asset—committing themselves to work for a fixed number of years in the future; the merchants then sold these contracts to colonists after the ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that, consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Total Emigration
Destination Number Percentage Percent listed as servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, indentured servitude as a means of financing the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor to capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.
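The marginal-product logic in the preceding paragraph can be made concrete with a standard production-function sketch. This is a textbook illustration, not a model drawn from the studies cited here. Suppose aggregate output is Cobb-Douglas, $Y = A K^{\alpha} L^{1-\alpha}$, where $K$ is capital, $L$ is labor, and $0 < \alpha < 1$. The competitive wage equals the marginal product of labor:

$$w = \frac{\partial Y}{\partial L} = (1-\alpha) A \left(\frac{K}{L}\right)^{\alpha}.$$

Immigration that raises $L$ relative to $K$ lowers $K/L$ and therefore $w$; if capital inflows keep pace with the new labor so that $K/L$ is unchanged, the wage is unchanged as well, which is the sense in which foreign lending that follows foreign labor can neutralize the wage effect of immigration.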

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand, it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent, and from rural to urban areas, in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and they were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result, growing numbers of women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Share in Non-Agriculture
Year Total Labor Force (1000s) Agriculture Total Manufacturing Services
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place, they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because many employment relationships are long-term, adjustments can occur along margins other than wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm (the 1860 census counted roughly two million farms). Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of the gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture was thus one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, cities throughout the country had for the first time to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills, their productivity increased, giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead, authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses, contracting with the firm to supply components or finished products for an agreed price and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savocca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor-turnover and informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job-ladders, and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985) others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade, with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas with room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers compensation insurance schemes and remove the issue from the courts. Once introduced, workers compensation schemes spread quickly: nine states passed legislation in 1911; 13 more had joined the bandwagon by 1913; and by 1920, 44 states had such legislation (Fishback 2001).

Along with workers compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males, but rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, its passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly—through the establishment of occupational safety regulations and anti-discrimination laws, for example—and indirectly—through its efforts to manage the macroeconomy to ensure maximum employment.

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, pp. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal, and since World War II there has been a rapid increase in the number of college-educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
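The comparison can be summarized in a simple ratio; this is a schematic formulation offered for exposition, not one taken from any particular study cited below. For two locations $i$ and $j$, define

$$R_{ij} = \frac{w_i / p_i}{w_j / p_j},$$

where $w$ is the nominal wage of comparable workers and $p$ is a local cost-of-living index. Full integration implies $R_{ij} = 1$, and convergence means $R_{ij}$ moving toward one over time. The relative real wages plotted in Figures 1 and 2 below are ratios of this kind, expressed with the base region set to 100.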

Falling transportation and communications costs have encouraged a long-run trend toward diminishing wage gaps, but this trend has not been consistent over time, nor has it applied to all markets in equal measure. That said, what stands out is the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration, wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and reversed wage convergence briefly. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wages once again began to converge toward U.S. levels, but this convergence reflected largely internally generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S relative to those in the Northeast across the period from 1820 to the early twentieth century. Within the United States, wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data—Margo (2000), Sundstrom and Rosenbloom (1993), and Coelho and Shepherd (1976)—are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.
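Montgomery’s summary statistic is, in effect, a coefficient of variation. The following sketch shows the calculation using hypothetical metropolitan wage figures, not Montgomery’s CPS-based data:

# Coefficient of variation of wages across metropolitan areas (SMSAs).
# The wage figures below are hypothetical and purely illustrative.
smsa_wages = [8.50, 11.00, 9.50, 11.50, 10.50, 9.00]  # average hourly wage by SMSA

mean_wage = sum(smsa_wages) / len(smsa_wages)
variance = sum((w - mean_wage) ** 2 for w in smsa_wages) / len(smsa_wages)
cv = variance ** 0.5 / mean_wage  # standard deviation as a share of the mean wage

print(f"Coefficient of variation: {cv:.1%}")
# Prints 10.8% for these made-up figures, comparable to the roughly
# 10 percent dispersion Montgomery reports for 1974-84.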

Table 3

Net Migration by Region, and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-00 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-00 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age and sex specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase this implies a net migration into the region; if the actual increase is less than predicted this implies net migration out of the region. The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
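The residual calculation described in the note above is straightforward to sketch in code. The following is a minimal illustration of the census-survival idea with hypothetical cohorts and survival rates; the actual Eldridge and Thomas estimates rest on detailed age- and sex-specific mortality rates, and nothing below reproduces their data.

# Minimal sketch of the census-survival (residual) method for estimating
# net migration over a decade. All numbers below are hypothetical.

def net_migration(pop_start_by_cohort, survival_rates, surviving_births, pop_end_actual):
    """Net migration = actual end-of-decade population minus the population
    predicted by surviving the initial cohorts and adding surviving births."""
    predicted = sum(p * s for p, s in zip(pop_start_by_cohort, survival_rates))
    predicted += surviving_births  # children born during the decade who survive to its end
    return pop_end_actual - predicted

# A hypothetical region with three age cohorts at the start of the decade (in 1,000s):
pop_start = [300.0, 250.0, 150.0]
survival = [0.97, 0.93, 0.80]   # ten-year survival probabilities for each cohort
surviving_births = 90.0         # births during the decade surviving to its end
pop_end = 780.0                 # population enumerated at the next census

nm = net_migration(pop_start, survival, surviving_births, pop_end)
print(f"Estimated net migration: {nm:+.1f} thousand")  # prints +46.5 here

A positive residual implies net in-migration, a negative one net out-migration; dividing by the initial population (here 700 thousand) converts the count into the per-1,000 rates reported in the lower panel of the table.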

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive and this essay can only touch on a few of the more general themes raised here as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as their cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000a, ch. 4) offers evidence of a high degree of equalization between farm and urban wages within local labor markets as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Williamson 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century saw a substantial convergence in both of these differentials. Table 4 displays the earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher-paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full-time, full-year workers. Data for 2004 are median weekly earnings of full-time wage and salary workers, derived from Current Population Survey data accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of men’s, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s, just as female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower-paying jobs. Whether these differences are the result of persistent discrimination, of differences in productivity, or of a choice by women to trade off greater flexibility in labor market commitment for lower pay remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, and that these rigidities would have resulted in higher levels of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurement of the unemployment rate began only in 1940. For earlier dates, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior therefore depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that both the average level of unemployment and its volatility declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility; rather, the apparent change in behavior is an artifact of Lebergott’s interpolation procedure.
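The methodological point can be made concrete with a stylized sketch. All figures and the elasticity below are hypothetical, chosen only to show that the interpolation rule, not the benchmarks, drives the volatility of the resulting series; neither Lebergott's nor Romer's actual procedure is reproduced here.

```python
# Two ways to fill in unemployment between census benchmarks (hypothetical data).
bench = {1900: 5.0, 1910: 5.9}   # benchmark unemployment rates (%)
indicator = {1905: 0.93}         # output relative to trend in 1905 (hypothetical)

# Method 1: straight-line interpolation between the two benchmarks.
linear_1905 = bench[1900] + (bench[1910] - bench[1900]) * (1905 - 1900) / 10

# Method 2: let unemployment deviate from the linear path in proportion to the
# indicator's shortfall from trend; the assumed elasticity sets the volatility.
elasticity = 3.0
scaled_1905 = linear_1905 * (1 + elasticity * (1.0 - indicator[1905]))

print(f"linear: {linear_1905:.1f}%  indicator-scaled: {scaled_1905:.1f}%")
```

With the same benchmarks, the first rule yields 5.5 percent for 1905 while the second yields about 6.6 percent; applied year by year, the second rule produces a visibly more volatile series, which is the crux of the Romer-Lebergott debate.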

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue concerns the overall level of inequality of pay and differences in pay between skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s, and then began to increase, reaching levels comparable to those of the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in the increased premium earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes—especially those associated with the increased use of information technology—have increased the relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, depressing wages in what have traditionally been high-paying blue-collar jobs.
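A minimal sketch of the 90/10 measure referenced above, computed on hypothetical wages (this is the generic percentile-ratio calculation, not Plotnick et al.'s exact procedure):

```python
# 90/10 wage ratio on a small hypothetical sample of hourly wages.
import statistics

wages = [9.5, 11, 12, 14, 15, 17, 20, 24, 30, 42]
deciles = statistics.quantiles(wages, n=10)   # nine cut points: d10 ... d90
p10, p90 = deciles[0], deciles[-1]
print(f"90/10 ratio: {p90 / p10:.2f}")        # a rising ratio means widening dispersion
```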

Efforts to extend the analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) argued that wage inequality increased over the course of the nineteenth century. Other scholars, however, have argued that the wage series Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real-world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather, they are subject to historical forces of increasing returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Jeffery G. Williamson. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J., III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist, 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago Press, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

The American Economy during World War II

Christopher J. Tassava

For the United States, World War II and the Great Depression constituted the most important economic events of the twentieth century. The war’s effects were varied and far-reaching. The war decisively ended the depression itself. The federal government emerged from the war as a potent economic actor, able to regulate economic activity and to partially control the economy through spending and consumption. American industry was revitalized by the war, and many sectors were by 1945 either sharply oriented to defense production (for example, aerospace and electronics) or completely dependent on it (atomic energy). The organized labor movement, strengthened by the war beyond even its depression-era height, became a major counterbalance to both the government and private industry. The war’s rapid scientific and technological changes continued and intensified trends begun during the Great Depression and created a permanent expectation of continued innovation on the part of many scientists, engineers, government officials and citizens. Similarly, the substantial increases in personal income and frequently, if not always, in quality of life during the war led many Americans to foresee permanent improvements to their material circumstances, even as others feared a postwar return of the depression. Finally, the war’s global scale severely damaged every major economy in the world except for the United States, which thus enjoyed unprecedented economic and political power after 1945.

The Great Depression

The global conflict which was labeled World War II emerged from the Great Depression, an upheaval which destabilized governments, economies, and entire nations around the world. In Germany, for instance, the rise of Adolf Hitler and the Nazi party occurred at least partly because Hitler claimed to be able to transform a weakened Germany into a self-sufficient military and economic power which could control its own destiny in European and world affairs, even as liberal powers like the United States and Great Britain were buffeted by the depression.

In the United States, President Franklin Roosevelt promised, less dramatically, to enact a “New Deal” which would essentially reconstruct American capitalism and governance on a new basis. As it waxed and waned between 1933 and 1940, Roosevelt’s New Deal mitigated some effects of the Great Depression, but did not end the economic crisis. In 1939, when World War II erupted in Europe with Germany’s invasion of Poland, numerous economic indicators suggested that the United States was still deeply mired in the depression. For instance, after 1929 the American gross domestic product declined for four straight years, then slowly and haltingly climbed back to its 1929 level, which was finally exceeded again in 1936. (Watkins, 2002; Johnston and Williamson, 2004)

Unemployment was another measure of the depression’s impact. Between 1929 and 1939, the American unemployment rate averaged 13.3 percent (calculated from “Corrected BLS” figures in Darby, 1976, 8). In the summer of 1940, about 5.3 million Americans were still unemployed — far fewer than the 11.5 million who had been unemployed in 1932 (about thirty percent of the American workforce) but still a significant pool of unused labor and, often, suffering citizens. (Darby, 1976, 7. For somewhat different figures, see Table 3 below.)

In spite of these dismal statistics, the United States was, in other ways, reasonably well prepared for war. The wide array of New Deal programs and agencies which existed in 1939 meant that the federal government was markedly larger and more actively engaged in social and economic activities than it had been in 1929. Moreover, the New Deal had accustomed Americans to a national government which played a prominent role in national affairs and which, at least under Roosevelt’s leadership, often chose to lead, not follow, private enterprise and to use new capacities to plan and administer large-scale endeavors.

Preparedness and Conversion

As war spread throughout Europe and Asia between 1939 and 1941, nowhere was the federal government’s leadership more important than in the realm of “preparedness” — the national project to ready the country for war by enlarging the military, strengthening certain allies such as Great Britain, and above all converting America’s industrial base to produce armaments and other war materiel rather than civilian goods. “Conversion” was the key issue in American economic life in 1940-1942. In many industries, company executives resisted converting to military production because they did not want to lose consumer market share to competitors who did not convert. Conversion thus became a goal pursued by public officials and labor leaders. In 1940, Walter Reuther, a high-ranking officer in the United Auto Workers labor union, provided impetus for conversion by advocating that the major automakers convert to aircraft production. Though initially rejected by car-company executives and many federal officials, the Reuther Plan effectively called the public’s attention to America’s lagging preparedness for war. Still, the auto companies only fully converted to war production in 1942 and only began substantially contributing to aircraft production in 1943.

Not all industries lagged as badly as autos, however. Merchant shipbuilding mobilized early and effectively. The industry was overseen by the U.S. Maritime Commission (USMC), a New Deal agency established in 1936 to revive the moribund shipbuilding industry, which had been in a depression since 1921, and to ensure that American shipyards would be capable of meeting wartime demands. With the USMC supporting and funding the establishment and expansion of shipyards around the country, especially on the Gulf and Pacific coasts, merchant shipbuilding took off. The entire industry had produced only 71 ships between 1930 and 1936, but from 1938 to 1940, commission-sponsored shipyards turned out 106 ships, and then almost that many in 1941 alone (Fischer, 41). The industry’s position in the vanguard of American preparedness grew from its strategic import — ever more ships were needed to transport American goods to Great Britain and France, among other American allies — and from the Maritime Commission’s ability to administer the industry through means as varied as construction contracts, shipyard inspectors, and raw goading of contractors by commission officials.

Many of the ships built in Maritime Commission shipyards carried American goods to the European allies as part of the “Lend-Lease” program, which was instituted in 1941 and provided another early indication that the United States could and would shoulder a heavy economic burden. By all accounts, Lend-Lease was crucial to enabling Great Britain and the Soviet Union to fight the Axis, not least before the United States formally entered the war in December 1941. (Though scholars are still assessing the impact of Lend-Lease on these two major allies, it is likely that both countries could have continued to wage war against Germany without American aid, which seems to have served largely to augment the British and Soviet armed forces and to have shortened the time necessary to retake the military offensive against Germany.) Between 1941 and 1945, the U.S. exported about $32.5 billion worth of goods through Lend-Lease, of which $13.8 billion went to Great Britain and $9.5 billion went to the Soviet Union (Milward, 71). The war dictated that aircraft, ships (and ship-repair services), military vehicles, and munitions would always rank among the quantitatively most important Lend-Lease goods, but food was also a major export to Britain (Milward, 72).

Pearl Harbor was an enormous spur to conversion. The formal declarations of war by the United States on Japan and Germany made plain, once and for all, that the American economy would now need to be transformed into what President Roosevelt had called “the Arsenal of Democracy” a full year before, in December 1940. From the perspective of federal officials in Washington, the first step toward wartime mobilization was the establishment of an effective administrative bureaucracy.

War Administration

From the beginning of preparedness in 1939 through the peak of war production in 1944, American leaders recognized that the stakes were too high to permit the war economy to grow in an unfettered, laissez-faire manner. American manufacturers, for instance, could not be trusted to stop producing consumer goods and to start producing materiel for the war effort. To organize the growing economy and to ensure that it produced the goods needed for war, the federal government spawned an array of mobilization agencies which not only often purchased goods (or arranged their purchase by the Army and Navy), but which in practice closely directed those goods’ manufacture and heavily influenced the operation of private companies and whole industries.

Though both the New Deal and mobilization for World War I served as models, the World War II mobilization bureaucracy assumed its own distinctive shape as the war economy expanded. Most importantly, American mobilization was markedly less centralized than mobilization in other belligerent nations. The war economies of Britain and Germany, for instance, were overseen by war councils which comprised military and civilian officials. In the United States, the Army and Navy were not incorporated into the civilian administrative apparatus, nor was a supreme body created to subsume military and civilian organizations and to direct the vast war economy.

Instead, the military services enjoyed almost-unchecked control over their enormous appetites for equipment and personnel. With respect to the economy, the services were largely able to curtail production destined for civilians (e.g., automobiles or many non-essential foods) and even for war-related but non-military purposes (e.g., textiles and clothing). In parallel to but never commensurate with the Army and Navy, a succession of top-level civilian mobilization agencies sought to influence Army and Navy procurement of manufactured goods like tanks, planes, and ships, raw materials like steel and aluminum, and even personnel. One way of gauging the scale of the increase in federal spending and the concomitant increase in military spending is through comparison with GDP, which itself rose sharply during the war. Table 1 shows the dramatic increases in GDP, federal spending, and military spending.

Table 1: Federal Spending and Military Spending during World War II

(dollar values in billions of constant 1940 dollars)

GDP Federal Spending Defense Spending
Year total $ % increase total $ % increase % of GDP total $ % increase % of GDP % of federal spending
1940 101.40 n/a 9.47 n/a 9.34% 1.66 n/a 1.64% 17.53%
1941 120.67 19.00% 13.00 37.28% 10.77% 6.13 269.28% 5.08% 47.15%
1942 139.06 15.24% 30.18 132.15% 21.70% 22.05 259.71% 15.86% 73.06%
1943 136.44 -1.88% 63.57 110.64% 46.59% 43.98 99.46% 32.23% 69.18%
1944 174.84 28.14% 72.62 14.24% 41.54% 62.95 43.13% 36.00% 86.68%
1945 173.52 -0.75% 72.11 -0.70% 41.56% 64.53 2.51% 37.19% 89.49%

Sources: 1940 GDP figure from Louis Johnston and Samuel H. Williamson, “The Annual Real and Nominal GDP for the United States, 1789 — Present,” Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1941-1945 GDP figures deflated to 1940 dollars using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, “Budget of the United States Government: Historical Tables Fiscal Year 2005,” Table 6.1 (Composition of Outlays: 1940-2009) and Table 3.1 (Outlays by Superfunction and Function: 1940-2009).
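As a sketch of the deflation step described in the source note, the following shows how a nominal figure is converted into constant 1940 dollars with a CPI ratio. The CPI levels and the nominal outlay below are approximate, illustrative values rather than figures taken from the table's sources.

```python
# Deflating a nominal dollar figure into constant 1940 dollars with a CPI ratio.
# The CPI levels and the nominal outlay are approximate, illustrative values.
cpi = {1940: 14.0, 1944: 17.6}   # approximate annual-average CPI levels

def to_1940_dollars(nominal: float, year: int) -> float:
    """Convert a nominal figure from `year` into constant 1940 dollars."""
    return nominal * cpi[1940] / cpi[year]

# Roughly $91 billion of nominal 1944 federal outlays deflates to about
# $73 billion in 1940 dollars, broadly in line with the table's 1944 entry.
print(f"{to_1940_dollars(91.3, 1944):.2f}")
```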

Preparedness Agencies

To oversee this growth, President Roosevelt created a number of preparedness agencies beginning in 1939, including the Office for Emergency Management and its key sub-organization, the National Defense Advisory Commission; the Office of Production Management; and the Supply Priorities Allocation Board. None of these organizations was particularly successful at generating or controlling mobilization because all included two competing parties. On one hand, private-sector executives and managers had joined the federal mobilization bureaucracy but continued to emphasize corporate priorities such as profits and positioning in the marketplace. On the other hand, reform-minded civil servants, who were often holdovers from the New Deal, emphasized the state’s prerogatives with respect to mobilization and war making. As a result of this basic division in the mobilization bureaucracy, “the military largely remained free of mobilization agency control” (Koistinen, 502).

War Production Board

In January 1942, as part of another effort to mesh civilian and military needs, President Roosevelt established a new mobilization agency, the War Production Board, and placed it under the direction of Donald Nelson, a former Sears Roebuck executive. Nelson understood immediately that the staggeringly complex problem of administering the war economy could be reduced to one key issue: balancing the needs of civilians — especially the workers whose efforts sustained the economy — against the needs of the military — especially those of servicemen and women but also their military and civilian leaders.

Though neither Nelson nor other high-ranking civilians ever fully resolved this issue, Nelson did realize several key economic goals. First, in late 1942, Nelson successfully resolved the so-called “feasibility dispute,” a conflict between civilian administrators and their military counterparts over the extent to which the American economy should be devoted to military needs during 1943 (and, by implication, in subsequent war years). Arguing that “all-out” production for war would harm America’s long-term ability to continue to produce for war after 1943, Nelson convinced the military to scale back its Olympian demands. He thereby also established a precedent for planning war production so as to meet most military and some civilian needs. Second (and partially as a result of the feasibility dispute), the WPB in late 1942 created the “Controlled Materials Plan,” which effectively allocated steel, aluminum, and copper to industrial users. The CMP remained in effect throughout the war, and helped curtail conflict among the military services and between them and civilian agencies over the growing but still scarce supplies of those three key metals.

Office of War Mobilization

By late 1942 it was clear that Nelson and the WPB were unable to fully control the growing war economy and especially to wrangle with the Army and Navy over the necessity of continued civilian production. Accordingly, in May 1943 President Roosevelt created the Office of War Mobilization and in July put James Byrnes — a trusted advisor, a former U.S. Supreme Court justice, and the so-called “assistant president” — in charge. Though the WPB was not abolished, the OWM soon became the dominant mobilization body in Washington. Unlike Nelson, Byrnes was able to establish an accommodation with the military services over war production by “acting as an arbiter among contending forces in the WPB, settling disputes between the board and the armed services, and dealing with the multiple problems” of the War Manpower Commission, the agency charged with controlling civilian labor markets and with assuring a continuous supply of draftees to the military (Koistinen, 510).

Beneath the highest-level agencies like the WPB and the OWM, a vast array of other federal organizations administered everything from labor (the War Manpower Commission) to merchant shipbuilding (the Maritime Commission) and from prices (the Office of Price Administration) to food (the War Food Administration). Given the scale and scope of these agencies’ efforts, they did sometimes fail, and especially so when they carried with them the baggage of the New Deal. By the midpoint of America’s involvement in the war, for example, the Civilian Conservation Corps, the Works Progress Administration, and the Rural Electrification Administration — all prominent New Deal organizations which tried and failed to find a purpose in the mobilization bureaucracy — had been actually or virtually abolished.

Taxation

However, these agencies were often quite successful in achieving their respective, narrower aims. The Department of the Treasury, for instance, was remarkably successful at raising money to pay for the war, through both the first general income tax in American history and the famous “war bonds” sold to the public. Beginning in 1940, the government extended the income tax to virtually all Americans and began collecting the tax via the now-familiar method of continuous withholding from paychecks (rather than lump-sum payments after the fact). The number of Americans required to pay federal taxes rose from 4 million in 1939 to 43 million in 1945. With such a large pool of taxpayers, the American government took in $45 billion in 1945, an enormous increase over the $8.7 billion collected in 1941 but still far short of the $83 billion spent on the war in 1945. Over that same period, federal tax revenue grew from about 8 percent of GDP to more than 20 percent. Americans who earned as little as $500 per year paid income tax at a 23 percent rate, while those who earned more than $1 million per year paid a 94 percent rate. The average income tax rate peaked in 1944 at 20.9 percent (“Fact Sheet: Taxes”).
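The “more than 20 percent” figure squares with the other numbers cited here: with roughly $223 billion of nominal GDP in 1945 (the 1945 entry in Table 7),

\[
\frac{45}{223} \approx 0.20,
\]

so the $45 billion collected in 1945 amounted to about 20 percent of GDP.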

War Bonds

All told, taxes provided about $136.8 billion of the war’s total cost of $304 billion (Kennedy, 625). To cover the other $167.2 billion, the Treasury Department also expanded its bond program, creating the famous “war bonds” hawked by celebrities and purchased in vast numbers and enormous values by Americans. The first war bond was purchased by President Roosevelt on May 1, 1941 (“Introduction to Savings Bonds”). Though the bonds returned only 2.9 percent annual interest after a 10-year maturity, they nonetheless served as a valuable source of revenue for the federal government and an extremely important investment for many Americans. Bonds served as a way for citizens to make an economic contribution to the war effort, but because interest on them accumulated more slowly than consumer prices rose, they could not completely preserve income which could not be readily spent during the war. By the time war-bond sales ended in 1946, 85 million Americans had purchased more than $185 billion worth of the securities, often through automatic deductions from their paychecks (“Brief History of World War Two Advertising Campaigns: War Loans and Bonds”). Commercial institutions like banks also bought billions of dollars of bonds and other treasury paper, holding more than $24 billion at the war’s end (Kennedy, 626).
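The 2.9 percent figure is consistent with the standard Series E terms, under which a $25 face-value bond sold for $18.75 and was redeemable at face value ten years later (these terms are supplied here as background and are not stated in the sources cited above):

\[
\left(\frac{25.00}{18.75}\right)^{1/10} - 1 \approx 0.029,
\]

that is, about 2.9 percent per year, compounded annually.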

Price Controls and the Standard of Living

Fiscal and financial matters were also addressed by other federal agencies. For instance, the Office of Price Administration used its “General Maximum Price Regulation” (also known as “General Max”) to attempt to curtail inflation by holding prices at their March 1942 levels. In July, the National War Labor Board (NWLB; a successor to a New Deal-era body) limited wartime wage increases to about 15 percent, the factor by which the cost of living had risen from January 1941 to May 1942. Neither “General Max” nor the wage-increase limit was entirely successful, though federal efforts did curtail inflation. Between April 1942 and June 1946, the period of the most stringent federal controls on inflation, the annual rate of inflation was just 3.5 percent; the annual rate had been 10.3 percent in the six months before April 1942, and it soared to 28.0 percent in the six months after June 1946 (Rockoff, “Price and Wage Controls in Four Wartime Periods,” 382). With wages rising about 65 percent over the course of the war, this limited success in cutting the rate of inflation meant that many American civilians enjoyed a stable or even improving quality of life during the war (Kennedy, 641). Improvement in the standard of living was not ubiquitous, however. In some regions, such as rural areas in the Deep South, living standards stagnated or even declined, and according to some economists, the national living standard barely stayed level or even declined (Higgs, 1992).
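The six-month rates quoted from Rockoff are annualized. In general, a price-level change observed over $k$ months annualizes as

\[
\pi_{\text{annual}} = \left(\frac{P_{t+k}}{P_{t}}\right)^{12/k} - 1,
\]

so, to take a hypothetical example, a 5 percent rise in prices over six months annualizes to $(1.05)^{2} - 1 \approx 10.25$ percent, close to the 10.3 percent rate cited for the six months before April 1942.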

Labor Unions

Labor unions and their members benefited especially. The NWLB’s “maintenance-of-membership” rule allowed unions to count all new employees as union members and to draw union dues from those new employees’ paychecks, so long as the unions themselves had already been recognized by the employer. Given that most new employment occurred in unionized workplaces, including plants funded by the federal government through defense spending, “the maintenance-of-membership ruling was a fabulous boon for organized labor,” for it required employers to accept unions and allowed unions to grow dramatically: organized labor expanded from 10.5 million members in 1941 to 14.75 million in 1945 (Blum, 140). By 1945, approximately 35.5 percent of the non-agricultural workforce was unionized, a record high.

The War Economy at High Water

Despite the almost-continual crises of the civilian war agencies, the American economy expanded at an unprecedented (and unduplicated) rate between 1941 and 1945. The gross national product of the U.S., as measured in constant dollars, grew from $88.6 billion in 1939 — while the country was still suffering from the depression — to $135 billion in 1944. War-related production skyrocketed from just two percent of GNP to 40 percent in 1943 (Milward, 63).
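In annualized terms, the constant-dollar GNP figures just cited imply growth of roughly

\[
\left(\frac{135.0}{88.6}\right)^{1/5} - 1 \approx 0.088,
\]

or about 8.8 percent per year between 1939 and 1944.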

As Table 2 shows, output in many American manufacturing sectors increased spectacularly from 1939 to 1944, the height of war production in many industries.

Table 2: Indices of American Manufacturing Output (1939 = 100)

1940 1941 1942 1943 1944
Aircraft 245 630 1706 2842 2805
Munitions 140 423 2167 3803 2033
Shipbuilding 159 375 1091 1815 1710
Aluminum 126 189 318 561 474
Rubber 109 144 152 202 206
Steel 131 171 190 202 197

Source: Milward, 69.

Expansion of Employment

The wartime economic boom spurred and benefited from several important social trends. Foremost among these trends was the expansion of employment, which paralleled the expansion of industrial production. In 1944, unemployment dipped to 1.2 percent of the civilian labor force, a record low in American economic history and as near to “full employment” as is likely possible (Samuelson). Table 3 shows the overall employment and unemployment figures during the war period.

Table 3: Civilian Employment and Unemployment during World War II

(Numbers in thousands)

1940 1941 1942 1943 1944 1945
All Non-institutional Civilians 99,840 99,900 98,640 94,640 93,220 94,090
Civilian Labor Force Total 55,640 55,910 56,410 55,540 54,630 53,860
% of Population 55.7% 56% 57.2% 58.7% 58.6% 57.2%
Employed Total 47,520 50,350 53,750 54,470 53,960 52,820
% of Population 47.6% 50.4% 54.5% 57.6% 57.9% 56.1%
% of Labor Force 85.4% 90.1% 95.3% 98.1% 98.8% 98.1%
Unemployed Total 8,120 5,560 2,660 1,070 670 1,040
% of Population 8.1% 5.6% 2.7% 1.1% 0.7% 1.1%
% of Labor Force 14.6% 9.9% 4.7% 1.9% 1.2% 1.9%

Source: Bureau of Labor Statistics, “Employment status of the civilian noninstitutional population, 1940 to date.” Available at http://www.bls.gov/cps/cpsaat1.pdf.

It was not only those who had been unemployed during the depression who found jobs. So, too, did about 10.5 million Americans who either could not have had jobs earlier (the 3.25 million youths who came of age after Pearl Harbor) or who would not have sought employment (3.5 million women, for instance). By 1945, the percentage of blacks who held war jobs — eight percent — approximated blacks’ percentage in the American population — about ten percent (Kennedy, 775). Almost 19 million American women (including millions of black women) were working outside the home by 1945. Though most continued to hold traditional female occupations such as clerical and service jobs, two million women did labor in war industries (half in aerospace alone) (Kennedy, 778). Employment did not just increase on the industrial front. Civilian employment by the executive branch of the federal government — which included the war administration agencies — rose from about 830,000 in 1938 (already a historical peak) to 2.9 million in June 1945 (Nash, 220).

Population Shifts

Migration was another major socioeconomic trend. The 15 million Americans who joined the military — who, that is, became employees of the military — all moved to and between military bases; 11.25 million ended up overseas. Continuing the movements of the depression era, about 15 million civilian Americans made a major move (defined as changing their county of residence). African-Americans moved with particular alacrity and permanence: 700,000 left the South and 120,000 arrived in Los Angeles during 1943 alone. Migration was especially strong along rural-urban axes, especially to war-production centers around the country, and along an east-west axis (Kennedy, 747-748, 768). For instance, as Table 4 shows, the population of the three Pacific Coast states grew by a third between 1940 and 1945, permanently altering their demographics and economies.

Table 4: Population Growth in Washington, Oregon, and California, 1940-1945

(populations in millions)

1940 1941 1942 1943 1944 1945 % growth 1940-1945
Washington 1.7 1.8 1.9 2.1 2.1 2.3 35.3%
Oregon 1.1 1.1 1.1 1.2 1.3 1.3 18.2%
California 7.0 7.4 8.0 8.5 9.0 9.5 35.7%
Total 9.8 10.3 11.0 11.8 12.4 13.1 33.7%

Source: Nash, 222.

A third wartime socioeconomic trend was somewhat ironic, given the reduction in the supply of civilian goods: rapid increases in many Americans’ personal incomes. Driven by the federal government’s abilities to prevent price inflation and to subsidize high wages through war contracting and by the increase in the size and power of organized labor, incomes rose for virtually all Americans — whites and blacks, men and women, skilled and unskilled. Workers at the lower end of the spectrum gained the most: manufacturing workers enjoyed about a quarter more real income in 1945 than in 1940 (Kennedy, 641). These rising incomes were part of a wartime “great compression” of wages which equalized the distribution of incomes across the American population (Goldin and Margo, 1992). Again focusing on three war-boom states in the West, Table 5 shows that personal-income growth continued after the war, as well.

Table 5: Personal Income per Capita in Washington, Oregon, and California, 1940 and 1948

1940 1948 % growth
Washington $655 $929 42%
Oregon $648 $941 45%
California $835 $1,017 22%

Source: Nash, 221. Adjusted for inflation using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl.

Despite the focus on military-related production in general and the impact of rationing in particular, spending in many civilian sectors of the economy rose even as the war consumed billions of dollars of output. Hollywood boomed as workers bought movie tickets rather than scarce clothes or unavailable cars. Americans placed more legal wagers in 1943 and 1944, and racetracks made more money than at any time before. In 1942, Americans spent $95 million on legal pharmaceuticals, $20 million more than in 1941. Department-store sales in November 1944 were greater than in any previous month in any year (Blum, 95-98). Black markets for rationed or luxury goods — from meat and chocolate to tires and gasoline — also boomed during the war.

Scientific and Technological Innovation

As observers during the war and ever since have recognized, scientific and technological innovations were a key aspect in the American war effort and an important economic factor in the Allies’ victory. While all of the major belligerents were able to tap their scientific and technological resources to develop weapons and other tools of war, the American experience was impressive in that scientific and technological change positively affected virtually every facet of the war economy.

The Manhattan Project

American techno-scientific innovations mattered most dramatically in “high-tech” sectors which were often hidden from public view by wartime secrecy. For instance, the Manhattan Project to create an atomic weapon was a direct and massive result of a stunning scientific breakthrough: the creation of a controlled nuclear chain reaction by a team of scientists at the University of Chicago in December 1942. Under the direction of the U.S. Army and several private contractors, scientists, engineers, and workers built a nationwide complex of laboratories and plants to manufacture atomic fuel and to fabricate atomic weapons. This network included laboratories at the University of Chicago and the University of California-Berkeley, uranium-processing complexes at Oak Ridge, Tennessee, and Hanford, Washington, and the weapon-design lab at Los Alamos, New Mexico. The Manhattan Project climaxed in August 1945, when the United States dropped two atomic weapons on Hiroshima and Nagasaki, Japan; these attacks likely accelerated Japanese leaders’ decision to seek peace with the United States. By that time, the Manhattan Project had become a colossal economic endeavor, costing approximately $2 billion and employing more than 100,000 people.

Though important and gigantic, the Manhattan Project was an anomaly in the broader war economy. Technological and scientific innovation also transformed less-sophisticated but still complex sectors such as aerospace or shipbuilding. The United States, as David Kennedy writes, “ultimately proved capable of some epochal scientific and technical breakthroughs, [but] innovated most characteristically and most tellingly in plant layout, production organization, economies of scale, and process engineering” (Kennedy, 648).

Aerospace

Aerospace provides one crucial example. American heavy bombers, like the B-29 Superfortress, were highly sophisticated weapons which could not have existed, much less contributed to the air war on Germany and Japan, without innovations such as bombsights, radar, and high-performance engines or advances in aeronautical engineering, metallurgy, and even factory organization. Encompassing hundreds of thousands of workers, four major factories, and $3 billion in government spending, the B-29 project required almost unprecedented organizational capabilities by the U.S. Army Air Forces, several major private contractors, and labor unions (Vander Meulen, 7). Overall, American aircraft production was the single largest sector of the war economy, costing $45 billion (almost a quarter of the $183 billion spent on war production), employing a staggering two million workers, and, most importantly, producing over 125,000 aircraft, which Table 6 describes in more detail.

Table 6: Production of Selected U.S. Military Aircraft (1941-1945)

Bombers 49,123
Fighters 63,933
Cargo 14,710
Total 127,766

Source: Air Force History Support Office

Shipbuilding

Shipbuilding offers a third example of innovation’s importance to the war economy. Allied strategy in World War II utterly depended on the movement of war materiel produced in the United States to the fighting fronts in Africa, Europe, and Asia. Between 1939 and 1945, the hundred merchant shipyards overseen by the U.S. Maritime Commission (USMC) produced 5,777 ships at a cost of about $13 billion (navy shipbuilding cost about $18 billion) (Lane, 8). Four key innovations facilitated this enormous wartime output. First, the commission itself allowed the federal government to direct the merchant shipbuilding industry. Second, the commission funded entrepreneurs, the industrialist Henry J. Kaiser chief among them, who had never before built ships and who were eager to use mass-production methods in the shipyards. These methods, including the substitution of welding for riveting and the addition of hundreds of thousands of women and minorities to the formerly all-white and all-male shipyard workforces, were a third crucial innovation. Last, the commission facilitated mass production by choosing to build many standardized vessels like the ugly, slow, and ubiquitous “Liberty” ship. By adapting well-known manufacturing techniques and emphasizing easily-made ships, merchant shipbuilding became a low-tech counterexample to the atomic-bomb project and the aerospace industry, yet also a sector which was spectacularly successful.

Reconversion and the War’s Long-term Effects

Reconversion from military to civilian production had been an issue as early as 1944, when WPB Chairman Nelson began pushing to scale back war production in favor of renewed civilian production. The military’s opposition to Nelson had contributed to the accession of James Byrnes and the OWM to the paramount spot in the war-production bureaucracy. Meaningful planning for reconversion was postponed until 1944, and the actual process only began in earnest in early 1945, accelerating through V-E Day in May and V-J Day in September.

The most obvious effect of reconversion was the shift away from military production and back to civilian production. As Table 7 shows, this shift — as measured by declines in overall federal spending and in military spending — was dramatic, but it did not cause the postwar depression which many Americans dreaded. Rather, American GDP continued to grow after the war (albeit not as rapidly as it had during the war; compare Table 1). Defense spending, though far below its wartime peak, remained high by prewar standards and contributed to the creation of the “military-industrial complex,” the network of private companies, non-governmental organizations, universities, and federal agencies which collectively shaped American national defense policy and activity during the Cold War.

Table 7: Federal Spending and Military Spending after World War II

(dollar values in billions of constant 1945 dollars)

              GDP                   Federal spending                   Defense spending
Year    Total     % increase   Total     % increase   % of GDP   Total     % increase   % of GDP   % of federal spending
1945    223.10       n/a       92.71       1.50%       41.90%    82.97       4.80%       37.50%        89.50%
1946    222.30     -0.36%      55.23     -40.40%       24.80%    42.68     -48.60%       19.20%        77.30%
1947    244.20      8.97%      34.50     -37.50%       14.80%    12.81     -70.00%        5.50%        37.10%
1948    269.20      9.29%      29.76     -13.70%       11.60%     9.11     -28.90%        3.50%        30.60%
1949    267.30     -0.71%      38.84      30.50%       14.30%    13.15      44.40%        4.80%        33.90%
1950    293.80      9.02%      42.56       9.60%       15.60%    13.72       4.40%        5.00%        32.20%

Sources: 1945 GDP figure from Louis Johnston and Samuel H. Williamson, “The Annual Real and Nominal GDP for the United States, 1789 — Present,” Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1946-1950 GDP figures calculated using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, “Budget of the United States Government: Historical Tables Fiscal Year 2005,” Table 6.1 (Composition of Outlays: 1940-2009) and Table 3.1 (Outlays by Superfunction and Function: 1940-2009).
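
As a check on the table’s arithmetic, the following minimal Python sketch recomputes Table 7’s derived columns from the raw dollar totals. The figures are simply transcribed from the table above, so any small discrepancies with the printed percentages reflect rounding in the original.

    # Recompute Table 7's derived columns from its raw totals (billions of 1945 dollars).
    # Figures transcribed from the table above; rounding explains tiny discrepancies.
    years   = [1945, 1946, 1947, 1948, 1949, 1950]
    gdp     = [223.10, 222.30, 244.20, 269.20, 267.30, 293.80]
    federal = [92.71, 55.23, 34.50, 29.76, 38.84, 42.56]
    defense = [82.97, 42.68, 12.81, 9.11, 13.15, 13.72]

    for i, year in enumerate(years):
        fed_share  = 100 * federal[i] / gdp[i]        # federal spending as % of GDP
        def_share  = 100 * defense[i] / gdp[i]        # defense spending as % of GDP
        def_of_fed = 100 * defense[i] / federal[i]    # defense as % of federal spending
        line = (f"{year}: federal {fed_share:.1f}% of GDP, "
                f"defense {def_share:.1f}% of GDP ({def_of_fed:.1f}% of federal)")
        if i > 0:  # year-over-year percent changes need the prior year
            line += (f"; federal {100 * (federal[i] / federal[i-1] - 1):+.1f}% y/y, "
                     f"defense {100 * (defense[i] / defense[i-1] - 1):+.1f}% y/y")
        print(line)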

Reconversion spurred the second major restructuring of the American workplace in five years, as returning servicemen flooded back into the workforce and many war workers left, some voluntarily and some involuntarily. Women, in particular, began leaving the labor force in 1944. By 1947, about a quarter of all American women worked outside the home — roughly the same proportion as in 1940 and far below the wartime peak of 36 percent in 1944 (Kennedy, 779).

G.I. Bill

Servicemen obtained numerous other economic benefits beyond their jobs, including educational assistance from the federal government and guaranteed mortgages and small-business loans via the Servicemen’s Readjustment Act of 1944, better known as the “G.I. Bill.” Former servicemen thus became a vast and advantaged class of citizens which demanded, among other goods, inexpensive, often suburban housing; vocational training and college educations; and private cars, which had been unobtainable during the war (Kennedy, 786-787).

The U.S.’s Position at the End of the War

At the macroeconomic scale, the war not only decisively ended the Great Depression, but also created the conditions for productive postwar collaboration among the federal government, private enterprise, and organized labor — a tripartite arrangement that helped sustain economic growth after the war. The U.S. emerged from the war not only physically unscathed but also economically strengthened by wartime industrial expansion, which placed the United States at an absolute and relative advantage over both its allies and its enemies.

Possessed of an economy which was larger and richer than any other in the world, American leaders determined to make the United States the center of the postwar world economy. American aid to Europe ($13 billion via the European Recovery Program (ERP) or “Marshall Plan,” 1947-1951) and Japan ($1.8 billion, 1946-1952) furthered this goal by tying the economic reconstruction of West Germany, France, Great Britain, and Japan to American import and export needs, among other factors. Even before the war ended, the Bretton Woods Conference in 1944 determined key aspects of international economic affairs by establishing standards for currency convertibility and creating institutions such as the International Monetary Fund and the precursor of the World Bank.

In brief, as economic historian Alan Milward writes, “the United States emerged in 1945 in an incomparably stronger position economically than in 1941… By 1945 the foundations of the United States’ economic domination over the next quarter of a century had been secured… [This] may have been the most influential consequence of the Second World War for the post-war world” (Milward, 63).

Selected References

Adams, Michael C.C. The Best War Ever: America and World War II. Baltimore: Johns Hopkins University Press, 1994.

Anderson, Karen. Wartime Women: Sex Roles, Family Relations, and the Status of Women during World War II. Westport, CT: Greenwood Press, 1981.

Air Force History Support Office. “Army Air Forces Aircraft: A Definitive Moment.” U.S. Air Force, 1993. Available at http://www.airforcehistory.hq.af.mil/PopTopics/AAFaircraft.htm.

Blum, John Morton. V Was for Victory: Politics and American Culture during World War II. New York: Harcourt Brace, 1976.

Bordo, Michael. “The Gold Standard, Bretton Woods, and Other Monetary Regimes: An Historical Appraisal.” NBER Working Paper No. 4310. April 1993.

“Brief History of World War Two Advertising Campaigns.” Duke University Rare Book, Manuscript, and Special Collections, 1999. Available at http://scriptorium.lib.duke.edu/adaccess/wwad-history.html

Brody, David. “The New Deal and World War II.” In The New Deal, vol. 1, The National Level, edited by John Braeman, Robert Bremmer, and David Brody, 267-309. Columbus: Ohio State University Press, 1975.

Connery, Robert. The Navy and Industrial Mobilization in World War II. Princeton: Princeton University Press, 1951.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or, an Explanation of Unemployment, 1934-1941.” Journal of Political Economy 84, no. 1 (February 1976): 1-16.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93, no. 4 (September 2003): 1399-1414.

Field, Alexander J. “U.S. Productivity Growth in the Interwar Period and the 1990s.” Paper presented at the “Understanding the 1990s: The Long Run Perspective” conference, Duke University and the University of North Carolina, March 26-27, 2004. Available at www.unc.edu/depts/econ/seminars/Field.pdf.

Fischer, Gerald J. A Statistical Summary of Shipbuilding under the U.S. Maritime Commission during World War II. Washington, DC: Historical Reports of War Administration; United States Maritime Commission, no. 2, 1949.

Friedberg, Aaron. In the Shadow of the Garrison State. Princeton: Princeton University Press, 2000.

Gluck, Sherna Berger. Rosie the Riveter Revisited: Women, the War, and Social Change. Boston: Twayne Publishers, 1987.

Goldin, Claudia. “The Role of World War II in the Rise of Women’s Employment.” American Economic Review 81, no. 4 (September 1991): 741-56.

Goldin, Claudia and Robert A. Margo. “The Great Compression: Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 2 (February 1992): 1-34.

Harrison, Mark, editor. The Economics of World War II: Six Great Powers in International Comparison. Cambridge: Cambridge University Press, 1998.

Higgs, Robert. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s.” Journal of Economic History 52, no. 1 (March 1992): 41-60.

Holley, I.B. Buying Aircraft: Materiel Procurement for the Army Air Forces. Washington, DC: U.S. Government Printing Office, 1964.

Hooks, Gregory. Forging the Military-Industrial Complex: World War II’s Battle of the Potomac. Urbana: University of Illinois Press, 1991.

Janeway, Eliot. The Struggle for Survival: A Chronicle of Economic Mobilization in World War II. New Haven: Yale University Press, 1951.

Jeffries, John W. Wartime America: The World War II Home Front. Chicago: Ivan R. Dee, 1996.

Johnston, Louis and Samuel H. Williamson. “The Annual Real and Nominal GDP for the United States, 1789 – Present.” Available at Economic History Services, March 2004, URL: http://www.eh.net/hmit/gdp/; accessed 3 June 2005.

Kennedy, David M. Freedom from Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999.

Koistinen, Paul A.C. Arsenal of World War II: The Political Economy of American Warfare, 1940-1945. Lawrence, KS: University Press of Kansas, 2004.

Kryder, Daniel. Divided Arsenal: Race and the American State during World War II. New York: Cambridge University Press, 2000.

Lane, Frederic, with Blanche D. Coll, Gerald J. Fischer, and David B. Tyler. Ships for Victory: A History of Shipbuilding under the U.S. Maritime Commission in World War II. Baltimore: Johns Hopkins University Press, 1951; republished, 2001.

Lichtenstein, Nelson. Labor’s War at Home: The CIO in World War II. New York: Cambridge University Press, 1982.

Lingeman, Richard P. Don’t You Know There’s a War On? The American Home Front, 1941-1945. New York: G.P. Putnam’s Sons, 1970.

Milkman, Ruth. Gender at Work: The Dynamics of Job Segregation by Sex during World War II. Urbana: University of Illinois Press, 1987.

Milward, Alan S. War, Economy, and Society, 1939-1945. Berkeley: University of California Press, 1979.

Nash, Gerald D. The American West Transformed: The Impact of the Second World War. Lincoln: University of Nebraska Press, 1985.

Nelson, Donald M. Arsenal of Democracy: The Story of American War Production. New York: Harcourt Brace, 1946.

O’Neill, William L. A Democracy at War: America’s Fight at Home and Abroad in World War II. New York: Free Press, 1993.

Overy, Richard. How the Allies Won. New York: W.W. Norton, 1995.

Rockoff, Hugh. “The Response of the Giant Corporations to Wage and Price Control in World War II.” Journal of Economic History 41, no. 1 (March 1981): 123-28.

Rockoff, Hugh. “Price and Wage Controls in Four Wartime Periods.” Journal of Economic History 41, no. 2 (June 1981): 381-401.

Samuelson, Robert J. “Great Depression.” In The Concise Encyclopedia of Economics, edited by David R. Henderson. Indianapolis: Liberty Fund, 2002. Available at http://www.econlib.org/library/Enc/GreatDepression.html

U.S. Department of the Treasury, “Fact Sheet: Taxes,” n. d. Available at http://www.treas.gov/education/fact-sheets/taxes/ustax.shtml

U.S. Department of the Treasury, “Introduction to Savings Bonds,” n.d. Available at http://www.treas.gov/offices/treasurer/savings-bonds.shtml

Vander Meulen, Jacob. Building the B-29. Washington, DC: Smithsonian Institution Press, 1995.

Watkins, Thayer. “The Recovery from the Depression of the 1930s.” 2002. Available at http://www2.sjsu.edu/faculty/watkins/recovery.htm

Citation: Tassava, Christopher. “The American Economy during World War II”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-american-economy-during-world-war-ii/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year    Weeks Report    Aldrich Report
1830        69.1             n/a
1840        67.1             68.4
1850        65.5             69.0
1860        62.0             66.0
1870        61.1             63.0
1880        60.7             61.8
1890         n/a             60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year    Census of Manufacturing    Jones Manufacturing    Owen Nonstudent Males    Greis Manufacturing    Greis All Workers    Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which information is available. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coalminers’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year     Manufacturing   Construction   Railroads   Bituminous Coal   Anthracite Coal
1850s    about 66        n/a            about 66    n/a               n/a
1870s    about 62        n/a            about 60    n/a               n/a
1890     60.0            51.3           n/a         n/a               n/a
1900     59.6            50.3           52.3        42.8              35.8
1910     57.3            45.2           51.5        38.9              43.3
1920     51.2            43.8           46.8        39.3              43.2
1930     50.6            42.9           n/a         33.3              37.0
1940     37.6            42.5           n/a         27.8              27.2
1955     38.5            37.1           n/a         32.4              31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
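
Fogel’s claim that by 2040 less than one-fourth of discretionary time will go to work follows directly from the hours in Table 6. A minimal Python sketch of the arithmetic, using figures transcribed from the table:

    # Work as a share of lifetime discretionary time, from Table 6 (hours).
    discretionary = {1880: 225_900, 1995: 298_500, 2040: 321_900}
    work          = {1880: 182_100, 1995: 122_400, 2040:  75_900}

    for year in (1880, 1995, 2040):
        share = work[year] / discretionary[year]
        print(f"{year}: {share:.0%} of discretionary time devoted to work")
    # Prints roughly 81% for 1880, 41% for 1995, and 24% for 2040 --
    # i.e., under one-fourth by 2040, as Fogel projects.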

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that between 1950 and 1979 annual hours worked per employee in the U.S. fell from 1,908 to 1,704, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than in Denmark and less than in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity        US                              USSR (Pskov)
                Men             Women           Men             Women
                1965    1981    1965    1981    1965    1981    1965    1981
Total Work      63.1    57.8    60.9    54.4    64.4    65.7    75.3    66.3
Market Work     51.6    44.0    18.9    23.9    54.6    53.8    43.8    39.3
Commuting       4.8     3.5     1.6     2.0     4.9     5.2     3.7     3.4
Housework       11.5    13.8    41.8    30.5    9.8     11.9    31.5    27.0

Activity        Japan                           Denmark
                Men             Women           Men             Women
                1965    1985    1965    1985    1964    1987    1964    1987
Total Work      60.5    55.5    64.7    55.6    45.4    46.2    43.4    43.9
Market Work     57.7    52.0    33.2    24.6    41.7    33.4    13.3    20.8
Commuting       3.6     4.5     1.0     1.2     n.a.    n.a.    n.a.    n.a.
Housework       2.8     3.5     31.5    31.0    3.7     12.8    30.1    23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good, and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited, effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law, in 1874, set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percentage of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and to 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, providing that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period: the Adamson Act of 1916, passed to counter a threatened nationwide railroad strike, which granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, up from only 32 in 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford alone employed more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that the productivity gains from reducing hours ceased once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by business’s arguments that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933 the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the end of the war, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to eight-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an eight-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told of the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit, and few will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
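
To make this “buying leisure” logic concrete, here is a deliberately simple, hypothetical illustration; the wage and hours figures below are invented for arithmetic clarity and are not drawn from the survey or sources cited above:

    # Hypothetical: a wage increase lets a worker take part of the gain as leisure
    # while still earning more in total.
    old_wage, old_hours = 1.00, 60.0   # wage index and a 60-hour week (illustrative)
    new_wage, new_hours = 1.50, 54.0   # wage up 50%; worker "buys" six hours of leisure

    old_earnings = old_wage * old_hours   # 60.0
    new_earnings = new_wage * new_hours   # 81.0
    print(f"hours: {old_hours:.0f} -> {new_hours:.0f} ({new_hours/old_hours - 1:+.0%})")
    print(f"earnings: {old_earnings:.0f} -> {new_earnings:.0f} "
          f"({new_earnings/old_earnings - 1:+.0%})")
    # A 50% wage gain funds a 10% shorter week and 35% higher earnings at the same time.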

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces had already lowered the length of the workweek. Overall, the elasticity of hours with respect to wages ranged from about -0.13 to -0.05; that is, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
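
To put that elasticity range in concrete terms, the short Python sketch below applies it to a hypothetical city whose wages are ten percent above average. The 55-hour baseline workweek is an assumption chosen for illustration, not a figure from Whaples’s data:

    # Illustrative: apply the estimated wage-hours elasticities (-0.13 to -0.05)
    # to a hypothetical city with wages 10 percent above the average.
    baseline_hours = 55.0   # assumed average workweek, c. 1919 (illustrative only)
    wage_premium   = 0.10   # wages 10% higher than in the average city

    for elasticity in (-0.05, -0.13):
        pct_change = elasticity * wage_premium          # proportional change in hours
        new_hours  = baseline_hours * (1 + pct_change)
        print(f"elasticity {elasticity}: hours {pct_change:+.1%} "
              f"-> about {new_hours:.1f} hours per week")
    # Wages 10% higher imply a workweek roughly 0.5 to 1.3 percent
    # (about 0.3 to 0.7 hours) shorter.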

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton Mifflin, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

In Praise of Empires: Globalization and Order

Author(s):Lal, Deepak
Reviewer(s):Goldstone, Jack A.

Published by EH.NET (February 2006)

Deepak Lal, In Praise of Empires: Globalization and Order. New York: Palgrave Macmillan, 2004. xxvi + 270 pp. $27 (cloth), ISBN: 1-4039-3639-0.

Reviewed for EH.NET by Jack A. Goldstone, Department of Sociology and Anthropology, George Mason University.

Deepak Lal is not your typical economist. His book Unintended Consequences pointed to religious reforms under Pope Gregory the Great to explain Europe’s subsequent economic growth. In Praise of Empires draws on Adam Smith and classical economics, but does so to derive guidelines for the conduct of U.S. foreign policy.

Lal’s core argument is straightforward. Economic growth requires order and security; otherwise state predation undermines private economic endeavor. Such order and security are provided (where they are provided) at the national level by national governments. Yet economies today are not isolated national economies. Rather, growth in the global economy as a whole has been most rapid and widespread during periods with high levels of international trade. Such trade also requires order and security, and in the absence of anything like a global government, Lal argues that it falls to empires to provide them. In pre-modern times, the Roman, Abbasid, Ottoman, Indian, and Chinese empires provided order and security in their domains. In modern times, the Pax Britannica (enforced by the British Navy and the British-led but Indian-manned army) provided the global order that underwrote the first age of globalization, roughly from 1870 to World War I. Since World War II, it has been American power that has supported the second age of globalization.

Advocates of American power will say, “of course.” Yet Lal is deeply concerned. He believes that America’s reluctance to embrace its imperial role will undermine its competence in addressing this task. In particular, although America is clearly willing to support a military force with global reach, it does so only in the name of national defense. Moreover, because America clearly wants to avoid administering an overseas empire, it has declined to develop anything like an imperial civil service to help create and maintain domestic order in far-flung and dangerous places.

In addition, Lal is worried that well-meaning moralists and misguided non-governmental organizations (NGOs) will undercut American efforts. Lal distinguishes between civil liberty and material culture, on the one hand, and democracy and cosmological culture, on the other. Civil liberty and material culture are open to everyone in the world; the freedom to contract and own property and to draw upon technology for modern production can be adopted by any society without its having to become “Westernized.” Empires (including the Pax Britannica and the ideal Pax Americana) promote economic prosperity by creating a secure domain in which the acquisition and exchange of property and the deployment of production technologies can take place with acceptable risks. In other words, empires make the world safe for investment, the acquisition of human and material capital, mutually advantageous exchange, and innovation.

Lal claims that well-meaning moralists do not realize that democracy and various other elements of Western culture, such as ‘human rights,’ are not universal, but are rather products of the West’s specific historical and religious development. Seeking to impose or promote democracy in countries that have not yet developed civic character among their citizens, or seeking to impose Western values and ideals on cultures that have, for millennia, defined ‘rights’ rather differently, is likely to be fruitless, to lead to exactly the clash of cultures that Samuel Huntington has prophesied, and in the process to replace productive global order with economically stifling conflict. Moreover, NGOs, operating as global ‘interest groups’ without electoral legitimacy or economic responsibility, have undertaken a moral crusade against multi-national corporations and global capitalism, and in particular against American political and economic power, in pursuit of a utopian ideal of global equality.

Lal’s solution to this is equally straightforward: close down all the ineffective multi-national organizations that are supposed to be supporting international order (the World Bank, IMF, United Nations), recognize that an imperial power supporting security and order is the sole realistic basis for increasing global prosperity, and let America (for no other power is up to the task) get on with that job.

To address Lal’s argument, let me break it into three parts: first, the claim that imperial order is essential for growth; second, the claim that America is best suited to take the role of imperial protector at present; and third, the claim that such missions as spreading democracy, promoting human rights, and protecting the environment against global warming and other alarums are dangerous distractions likely to undermine economic growth.

Lal is undoubtedly correct that most of mankind, for most of its history, has lived under the rule of empires. Yet the notion that such imperial rule is good for economic growth is debatable, at the least. Lal points to the defects of the Ottoman Empire (overly focused on martial expansion) and the Chinese Empire (where the Ming and Ching ‘closed’ themselves off from the global economy as much as possible before the nineteenth century Western incursions) as examples of empires gone wrong. Moreover, many of the most striking episodes of economic growth — China during the southern Song dynasty or the Netherlands in their “Golden Age” of the seventeenth century — occurred precisely when imperial hegemony was absent.

Lal’s argument is most persuasive when he focuses on growth during the “pax Britannica” from 1870 to 1914. During this period, when Britain clearly ruled the waves (and insisted on maintaining its naval predominance through treaties that imposed inferior navies on its rivals), global economic growth was undoubtedly rapid and widespread. But was this all cause and effect? This was the period in which Germany developed its chemical, steel, and railroad industries and, by its victories over France and Austria, created a “pax Germanica” in central and eastern Europe. Russia abolished serfdom, and Japan set out to end its isolation and recreate itself in the shape of a western industrial and imperial power. None of these actions was taken because Germany, Russia, and Japan felt themselves under the beneficial protection of a British hegemony that safeguarded their foreign trade; rather, it was precisely because, having suffered past defeats at the hands of west European powers, they were determined to strengthen themselves sufficiently to escape from British hegemony.

The problem of imperial order is much like the ‘paradox of power’ that Lal discusses at the level of the nation state. Any state government strong enough to enforce order and security for its citizens is also strong enough, if it wishes, to oppress them. The paradox is resolved at the state level by representative institutions and a legal system that allows the populace to block or overturn an overly predatory state, and that protects individuals and minorities from having their basic political and economic freedoms constrained by the majority.

Yet when facing an imperial hegemon, no matter how dedicated it may be to the principles of free trade and equal protection, other countries have no such mechanisms to protect themselves against the day when the imperial power becomes so misguided or selfish that it turns tyrannical and oppressive. Even virtuous and democratic Athens (as Thucydides reminds us) could be as ruthless and oppressive toward other Greek city-states within its empire as any individual dictator. Thus the presence of an imperial hegemon determined to preserve its predominance creates counter-pressures from other states seeking to escape it. German militarization and naval armament in the years leading up to World War I were a direct response to fears that British naval power could as easily be used to starve Germany as to protect global trade (Avner Offer, The First World War: An Agrarian Interpretation, 1991). The First World War, and the Second (which, as Lal himself points out, was largely a response to the handcuffs that France and Britain tried to put on Germany to preserve their dominance in the global system), were thus as much an effect of British imperial domination as was the prosperity that preceded it.

The two world wars broke Britain’s military and economic strength and left the field open for America to become the world’s military and economic superpower. At first, of course, the world was polarized between the U.S. and the socialist bloc, led by the USSR. The failure of socialism as a political and economic system, punctuated by the anti-communist revolutions of 1989-91, seems to have further demonstrated that the only path to stable economic growth is through capitalist institutions. Yet granting this demonstration, and America’s current place in the world, does it follow that American military, economic, and civil leadership would be most beneficial to the global economic order?

One would be happier about the prospect if America’s history of global interventions were not so ham-handed. It is difficult to claim that America’s recent interventions in Southeast Asia (Vietnam, Laos, and Cambodia), Haiti, Lebanon, Somalia, and Iraq were good for growth in those nations or their neighbors. Where intervention to restore order in dangerous places (Bosnia, Kuwait, Kosovo, Afghanistan) has been more successful, it was precisely in those places where America carefully put together broad coalitions of significant allies and operated as primus inter pares rather than as the 800-pound gorilla.

In regard to perhaps its greatest crises today — the acquisition or threatened acquisition of nuclear arms by North Korea and Iran — the U.S. is relying on multi-national diplomacy, having exhausted its own efforts at unilateral persuasion, and fearing that despite its overwhelming military might it has no clear military solution to these problems.

Lal overlooks one major aspect of Britain’s imperial success that America simply cannot reproduce today: Britain could rely on the manpower of other countries (mainly India) to do the work that British soldiers could not or would not manage on their own. America has developed a highly lethal, highly mobile, global military that can strike anywhere. But the problem with such a military, as shown in Iraq, is that it cannot hold any place without the assistance of large numbers of non-U.S. troops. In the absence of mass conscription and the deployment of millions of U.S. troops, the U.S. will not be able to impose pacification on entire countries as it did in post-war Germany and Japan.

One could add that America — with its distinctive attitudes toward an armed citizenry and huge central budget deficits, and its history of severe ethnic discrimination — may not be the ideal tutor for a world in which armed conflict, rampant government spending, and ethnic strife are some of the main obstacles to achieving economic prosperity. A blend of traditions from a variety of western and non-western nations may be a more palatable mix in forces seeking to impose order in disorderly places. The presence of a single dominant hegemon seems to create unity mainly among those seeking to overturn its order, rather than among those who support it.

Finally, while I fully sympathize with Lal’s concern that utopian quests more often lead to disaster than progress, I fear he is too harsh in his dismissal of NGOs and the pursuit of human rights. It was not only economic weakness that did in the Soviet Union and its satellites (Burma, despite far worse economic performance, shows no signs of collapse), but also the vigorous pressure for human rights by dissidents. Lal quotes the reformed environmentalist Bjorn Lomborg to ward off anti-globalization doomsayers: “We have reduced atmospheric pollution in the cities, … our rivers have become cleaner and support more life, … the problem of the ozone layer has been more or less solved.” All this is true, and the doomsayers were proved wrong. Yet those of us who choked on sulfuric acid and ozone during hundreds of ‘smog days’ in Los Angeles in the 1970s, or who canoed past hundreds of belly-up fish in the Potomac River near Washington, remember that the results Lomborg praises were achieved only because environmentalists worked to pass the Clean Air and Clean Water acts and to promote the Montreal Protocol, which brought international cooperation in managing ozone-destroying gases. Neither American power nor market forces unaided brought these desirable results.

Democracy has never been a panacea (it did not help the U.S. avoid a bloody civil war), and Lal is correct to say that imposing democracy on societies that are neither nations nor accustomed to liberal institutions is likely to produce less, rather than more, stability and growth. Nonetheless, for Lal to suggest that democracy may simply be outside of the cultural tradition of much of mankind again seems to me a bit too strong. Cultures are elastic and adaptable; they borrow much if it seems useful. The Japanese and Koreans borrowed many cultural elements from China without losing their Japanese or Korean character. Indonesia had no democratic tradition, but seems in recent years to have made great progress in using democracy as a tool to tame corruption and manage political life. South Korea and Taiwan may have gained much initial prosperity under authoritarian governments, but chose not to keep them indefinitely. We clearly have much to learn about the dynamic relationship between democracy and development, but to simply divide the world up into fixed cultural spheres seems to me to reproduce the very errors of Huntington’s work that Lal himself criticizes. Promoting democracy with the same intensity everywhere is probably a poor idea; but encouraging and supporting the spread of democracy judiciously may indeed lead to a more liberal world.

This is one of the most thought-provoking and stimulating books that I have read in years. Lal’s thesis is extraordinarily challenging to conventional thought. Agree or disagree, your opinions will have to change or be carefully rethought. Lal is completely right to raise the difficult problem of who enforces contracts (economic and political) in a globalizing world, an issue too often skirted by advocates of globalization such as Friedman and Bhagwati. You may embrace Lal’s solution or reject it, but it is clear that we will have to find a way to make the best of the combination of overwhelming American power and a world where everyone’s prosperity is at risk from disruptions of global trade.

Jack A. Goldstone is Hazel Professor of Public Policy at George Mason University. His book, A Peculiar Path: The Rise of the West in Global Context, 1500-1850, is forthcoming from Harvard University Press.

Subject(s):Economywide Country Studies and Comparative History
Geographic Area(s):General, International, or Comparative
Time Period(s):General or Comparative