EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

Project 2000/2001

Project 2000

Each month during 2000, EH.NET published a review essay on a significant work in twentieth-century economic history. The purpose of these essays was to survey the works that have had the most influence on the field of economic history and to highlight the intellectual accomplishments of twentieth-century economic historians. Each review essay outlines the work’s argument and findings, discusses the author’s methods and sources, and examines the impact that the work has had since its publication.

Nominations were received from dozens of EH.Net’s users. P2K
selection committee members were: Stanley Engerman (University of
Rochester), Alan Heston (University of Pennsylvania), Paul
Hohenberg, chair (Rensselaer Polytechnic Institute), and Mary
Yeager (University of California-Los Angeles). Project Chair was
Robert Whaples (Wake Forest University).

The review essays are:

Braudel, Fernand
Civilization and Capitalism, 15th-18th Century
Reviewed by Alan Heston (University of Pennsylvania).

Chandler, Alfred D. Jr.
The Visible Hand: The Managerial Revolution in American Business
Reviewed by David S. Landes (Department of Economics and History, Harvard University).

Chaudhuri, K. N.
The Trading World of Asia and the English East India Company, 1660-1760
Reviewed by Santhi Hejeebu.

Davis, Lance E. and North, Douglass C. (with the assistance of Calla Smorodin)
Institutional Change and American Economic Growth.
Reviewed by Cynthia Taft Morris (Department of Economics, Smith College and American University).

Fogel, Robert W.
Railroads and American Economic Growth: Essays in Econometric History
Reviewed by Lance Davis (California Institute of Technology).

Friedman, Milton and Schwartz, Anna Jacobson
A Monetary History of the United States, 1867-1960
Reviewed by Hugh Rockoff (Rutgers University).

Heckscher, Eli F.
Mercantilism
Reviewed by John J. McCusker (Departments of History and Economics, Trinity University).

Landes, David S.
The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present
Reviewed by Paul M. Hohenberg (Rensselaer Polytechnic Institute).

Pinchbeck, Ivy
Women Workers and the Industrial Revolution, 1750-1850 
Reviewed by Joyce Burnette (Wabash College).

Polanyi, Karl
The Great Transformation: The Political and Economic Origins of Our Time
Reviewed by Anne Mayhew (University of Tennessee).

Schumpeter, Joseph A.
Capitalism, Socialism and Democracy 
Reviewed by Thomas K. McCraw (Harvard Business School).

Weber, Max
The Protestant Ethic and the Spirit of Capitalism
Reviewed by Stanley Engerman.

Project 2001

Throughout 2001 and 2002, EH.Net published a second series
of review essays on important and influential works in economic
history. As with Project 2000, nominations for Project 2001 were
received from many EH.Net users and reviewed by the Selection
Committee: Lee Craig (North Carolina State University); Giovanni
Federico (University of Pisa); Anne McCants (MIT); Marvin McInnis
(Queen’s University); Albrecht Ritschl (University of Zurich);
Winifred Rothenberg (Tufts University); and Richard Salvucci
(Trinity University).

Project 2001 selections were:

Borah, Woodrow Wilson
New Spain’s Century of Depression
Reviewed by Richard Salvucci (Department of Economics, Trinity University).

Boserup, Ester
Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure
Reviewed by Giovanni Federico (Department of Modern History, University of Pisa).

Deane, Phyllis and W. A. Cole
British Economic Growth, 1688-1959: Trends and Structure
Reviewed by Knick Harley (Department of Economics, University of Western Ontario).

Fogel, Robert and Stanley Engerman
Time on the Cross: The Economics of American Negro Slavery
Reviewed by Thomas Weiss (Department of Economics, University of Kansas).

Gerschenkron, Alexander
Economic Backwardness in Historical Perspective
Review Essay by Albert Fishlow (International Affairs, Columbia University).

Horwitz, Morton
The Transformation of American Law, 1780-1860
Reviewed by Winifred B. Rothenberg (Department of Economics, Tufts University).

Kuznets, Simon
Modern Economic Growth: Rate, Structure and Spread
Reviewed by Richard A. Easterlin (Department of Economics, University of Southern California).

Le Roy Ladurie, Emmanuel
The Peasants of Languedoc
Reviewed by Anne E.C. McCants (Department of History, Massachusetts Institute of Technology).

North, Douglass and Robert Paul Thomas
The Rise of the Western World: A New Economic History
Reviewed by Philip R. P. Coelho (Department of Economics, Ball State University).

de Vries, Jan
The Economy of Europe in an Age of Crisis, 1600-1750
Review Essay by George Grantham (Department of Economics, McGill University).

Temin, Peter
The Jacksonian Economy
Reviewed by Richard Sylla (Department of Economics, Stern School of Business, New York University).

Wrigley, E. A. and R. S. Schofield
The Population History of England, 1541-1871: A Reconstruction

Project Coordinator and Editor: Robert Whaples (Wake Forest
University)

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, typically over a period of one year.1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago, in 1900, about three hundred out of every one hundred thousand miners were killed on the job each year.2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous.3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken around 1900 showed that only about half of all fatally injured workers recovered anything, and that their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety.4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality Rates per Thousand Workers per Year)

Years       American Anthracite   American Bituminous   Great Britain
1890-1894   3.29                  2.52                  1.61
1900-1904   3.13                  3.53                  1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers – and to their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars to couple and uncouple them and ride atop the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were lightly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889-1901

(Fatality Rates per Thousand Workers per Year)

                                        1889      1895      1901
British railroad workers, all causes    1.14      0.95      0.89
British trainmen (a), all causes        4.26      3.22      2.21
British trainmen, coupling              0.94      0.83      0.74
American railroad workers, all causes   2.67      2.31      2.50
American trainmen, all causes           8.52      6.45      7.35
American trainmen, coupling             1.73 (c)  1.20      0.78
American trainmen, braking (b)          3.25 (c)  2.44      2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increasing output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War, life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers, while many carriers provided jobs for all their injured men.8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission was established in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also dated from this period, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became a matter of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response, Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific body, not a regulatory one; it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and the National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies, such as DuPont, and in whole industries, such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but also reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates by about 38 percent between 1926 and 1939 (see Table 4).15

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reduction in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850-World War I.” Bulletin of the History of Medicine 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London: HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan. 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2,000 hours, ten injuries among 450 workers results in [10/(450 x 2,000)] x 1,000,000 = 11.1 injuries per million hours worked.
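These conversions are simple ratio arithmetic. The short Python sketch below, which uses only the footnote’s hypothetical figures (ten injuries, 450 workers, a 2,000-hour work year), reproduces them:

    # Convert raw injury counts into the rates discussed in the footnote.
    # The figures are the footnote's hypothetical example, not historical data.
    injuries = 10
    workers = 450
    hours_per_worker_year = 2000

    rate_per_worker = injuries / workers                    # 0.0222
    rate_per_thousand = rate_per_worker * 1_000             # 22.2 per thousand workers
    rate_per_hundred_thousand = rate_per_worker * 100_000   # 2,222 per hundred thousand workers

    hours_worked = workers * hours_per_worker_year          # 900,000 hours worked
    rate_per_million_hours = injuries / hours_worked * 1_000_000  # 11.1 per million hours

    print(rate_per_thousand, rate_per_hundred_thousand, rate_per_million_hours)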

2 For statistics on work injuries from 1922 to 1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism,” and Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety. Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor of American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal power, there was little need to use mineral fuel in seventeenth- and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make necessary improvements in order to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several transportation links between Pennsylvania’s anthracite fields and seaboard markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:

Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;

Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous as it would become in the era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power — even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842, when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in a union of his own, which struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio followed the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron-making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and in 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
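The inflation adjustment mentioned here is the standard deflation of a nominal price by a price index. A minimal Python sketch, using purely hypothetical numbers rather than the actual wartime series, shows the calculation:

    # Deflate a nominal price by a price index to obtain the real price
    # (base-year index = 100). All numbers here are hypothetical illustrations.
    def real_price(nominal_price: float, price_index: float, base: float = 100.0) -> float:
        """Return the price expressed in base-year dollars."""
        return nominal_price * base / price_index

    # If the nominal price doubles while the price index rises 50 percent,
    # the real price rises by roughly a third.
    p_base = real_price(nominal_price=3.00, price_index=100.0)   # 3.00
    p_later = real_price(nominal_price=6.00, price_index=150.0)  # 4.00
    print(round((p_later / p_base - 1) * 100))                   # about 33 percent real increase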

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some miners used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water line. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the implementation of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when a wave of strikes pushed many workers into union membership. By 1903, the UMWA listed about a quarter of a million members, had amassed a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coal fields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company, symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

 

Table 1: Coal Production in the United States, 1829-1899

(Production in thousands of tons)

Year   Anthracite   Bituminous   Percent Increase over Decade   Tons per capita
1829   138          102          --                             0.02
1839   1,008        552          550                            0.09
1849   3,995        2,453        313                            0.28
1859   9,620        6,013        142                            0.50
1869   17,083       15,821       110                            0.85
1879   30,208       37,898       107                            1.36
1889   45,547       95,683       107                            2.24
1899   60,418       193,323      80                             3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.

Table 2: Leading Coal Producing States, 1889

State           Coal Production (thousands of tons)
Pennsylvania    81,719
Illinois        12,104
Ohio            9,977
West Virginia   6,232
Iowa            4,095
Alabama         3,573
Indiana         2,845
Colorado        2,544
Kentucky        2,400
Kansas          2,221
Tennessee       1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.

History of the U.S. Telegraph Industry

Tomas Nonnenmacher, Allegheny College

Introduction

The electric telegraph was one of the first telecommunications technologies of the industrial age. Its immediate predecessors were homing pigeons, visual networks, the Pony Express, and railroads. By transmitting information quickly over long distances, the telegraph facilitated the growth of the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms. This entry focuses on the industrial organization of the telegraph industry from its inception through its demise and on the industry’s impact on the American economy.

The Development of the Telegraph

The telegraph was similar to many other inventions of the nineteenth century. It replaced an existing technology, dramatically reduced costs, was monopolized by a single firm, and ultimately was displaced by a newer technology. Like most radical new technologies, the telecommunications revolution of the mid-1800s was not a revolution at all, but rather consisted of many inventions and innovations in both technology and industrial organization. This section is broken into four parts, each reviewing an era of telegraphy: precursors to the electric telegraph, early industrial organization of the industry, Western Union’s dominance, and the decline of the industry.

Precursors to the Electric Telegraph

Webster’s definition of a telegraph is “an apparatus for communicating at a distance by coded signals.” The earliest telegraph systems consisted of smoke signals, drums, and mirrors used to reflect sunlight. In order for these systems to work, both parties (the sender and the receiver) needed a method of interpreting the signals. Henry Wadsworth Longfellow’s poem recounting Paul Revere’s ride (“One if by land, two if by sea, and I on the opposite shore will be”) gives an example of a simple system. The first extensive telegraph network was the visual telegraph. In 1791 the Frenchman Claude Chappe used a visual network (which consisted of a telescope, a clock, a codebook, and black and white panels) to send a message ten miles. He called his invention the télégraphe, or far writer. Chappe refined and expanded his network, and by 1799 his telegraph consisted of a network of towers with mechanical arms spread across France. The position of the arms was interpreted using a codebook with over 8,000 entries.

Technological Advances

Due to technological difficulties, the electric telegraph could not at first compete with the visual telegraph. The basic principle of the electric telegraph is to send an electric current through a wire; breaking the current in a particular pattern denotes letters or phrases. The Morse code, named after Samuel Morse, is still used today. For instance, the code for SOS (dot-dot-dot, dash-dash-dash, dot-dot-dot) is a well-known call for help. Two elements had to be perfected before an electric telegraph could work: a means of sending the signal (generating and storing electricity) and a means of receiving it (recording the breaks in the current).
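
As a rough illustration of the coding idea only (the two-letter table below is a hypothetical sketch, not part of the original article), a few lines of Python show how a message maps to dots and dashes:

# Illustrative sketch: a tiny Morse table covering only the letters used in the example.
MORSE = {"S": "...", "O": "---"}

def encode(message):
    # Translate each letter to its dot-dash pattern, separating letters with spaces.
    return " ".join(MORSE[letter] for letter in message.upper())

print(encode("SOS"))  # prints: ... --- ...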

The science behind the telegraph dates back at least as far as Roger Bacon’s (1220-1292) experiments in magnetism. Numerous small steps in the science of electricity and magnetism followed. Important inventions include those of Giambattista della Porta (1558), William Gilbert (1603), Stephen Gray (1729), William Watson (1747), Pieter van Musschenbroek (1754), Luigi Galvani (1786), Alessandro Giuseppe Antonio Anastasio Volta (1800), André-Marie Ampère (1820), William Sturgeon (1825), and Joseph Henry (1829). A much longer list could be made, but the point is that no single person can be credited with developing the necessary technology of the telegraph.

1830-1866: Development and Consolidation of the Electric Telegraph Industry

In 1832, Samuel Morse returned to the United States from his artistic studies in Europe. While discussing electricity with fellow passengers, Morse conceived of the idea of a single-wire electric telegraph. No one until this time had Morse’s zeal for the applicability of electromagnetism to telecommunications or his conviction of its eventual profitability. Morse obtained a patent in the United States in 1838 but split his patent right to gain the support of influential partners. He obtained a $30,000 grant from Congress in 1843 to build an experimental line between Baltimore and Washington. The first public message over Morse’s line (“What hath God wrought?”) echoed the first message over Chappe’s system (“If you succeed, you will bask in glory”). Both indicated the inventors’ convictions about the importance of their systems.

Morse and His Partners

Morse realized early on that he was incapable of handling the business end of the telegraph and hired Amos Kendall, a former Postmaster General and a member of Andrew Jackson’s “Kitchen Cabinet,” to manage his business affairs. By 1848 Morse had consolidated the partnership to four members. Kendall managed the three-quarters of the patent belonging to Morse, Leonard Gale, and Alfred Vail. Gale and Vail had helped Morse develop the telegraph’s technology. F.O.J. Smith, a former Maine Representative whose help was instrumental in obtaining the government grant, decided to retain direct control of his portion of the patent right. The partnership agreement was vague and led to discord between Kendall and Smith. Eventually the partners split the patent right geographically: Smith controlled New England, New York, and the upper Midwest, and Morse controlled the rest of the country.

The availability of financing influenced the early industrial organization of the telegraph. Initially, Morse tried to sell his patent to the government, Kendall, Smith, and several groups of businessmen, but all attempts were unsuccessful. Kendall then attempted to generate interest in building a unified system across the country. This too failed, leaving Kendall to sell the patent right piecemeal to regional interests. These lines covered the most potentially profitable routes, emanating from New York and reaching Washington, Buffalo, Boston and New Orleans. Morse also licensed feeder lines to supply main lines with business.

Rival Patents

Royal House and Alexander Bain introduced rival patents in 1846 and 1849. Entrepreneurs constructed competing lines on the major eastern routes using the new patents. The House device needed a higher quality wire and more insulation as it was a more precise instrument. It had a keyboard at one end and printed out letters at the other. At its peak, it could send messages considerably faster than Morse’s technique. The Bain device was similar to Morse’s, except that instead of creating dots and dashes, it discolored a piece of chemically treated paper by sending an electric current through it. Neither competitor had success initially, leading Kendall to underestimate their eventual impact on the market.

By 1851, ten separate firms ran lines into New York City. There were three competing lines between New York and Philadelphia, three between New York and Boston, and four between New York and Buffalo. In addition, two lines operated between Philadelphia and Pittsburgh, two between Buffalo and Chicago, three between points in the Midwest and New Orleans, and entrepreneurs erected lines between many Midwestern cities. In all, in 1851 the Bureau of the Census reported 75 companies with 21,147 miles of wire.

Multilateral Oligopolies

The telegraph markets in 1850 were multilateral oligopolies. The term “multilateral” means that the production process extended in several directions. Oligopolies are markets in which a small number of firms strategically interact. Telegraph firms competed against rivals on the same route, but sought alliances with firms with which they connected. For example, four firms (New York, Albany & Buffalo; New York State Printing; Merchants’ State; and New York and Erie) competed on the route between New York City and Buffalo. Rates fell dramatically (by more than 50%) as new firms entered, so this market was quite competitive for a while. But each of these firms sought to create an alliance with connecting firms, such as those with lines from New York City to Boston or Washington. Increased business from exchanging messages meant increased profitability.

Mistransmission Problems

Quality competition was also fierce, with the line that erected the best infrastructure and supplied the fastest service usually dominating other, less capable firms. Messages could easily be garbled, and given the predominately business-related use of the telegraph, a garbled message was often worse than no message at all. A message sent from Boston to St. Louis could have traveled over the lines of five firms. Due to the complexity of the production process, messages were also often lost, with no firm taking responsibility for the mistransmission. This lack of responsibility gave firms an incentive to provide a lower quality service compared to an integrated network. These issues ultimately contributed to the consolidation of the industry.

Horizontal and System Integration

Horizontal integration (integration between two competing firms) and system integration (integration between two connecting firms) occurred in the telegraph industry during different periods. System integration occurred between 1846 and 1852, as main lines acquired most of the feeder lines in the country. In 1852 the Supreme Court declared the Bain telegraph an infringement on Morse’s patent, and Bain lines merged with Morse lines across the country. Between 1853 and 1857 regional monopolies formed and signed the “Treaty of Six Nations,” a pooling agreement between the six largest regional firms. During this phase the industry experienced both horizontal and system integration. By the end of the period, most remaining firms were regional monopolists, controlled several large cities, and owned both the House and the Morse patents. Figure 1 shows the locations of these firms.

Figure 1: Treaty of Six Nations

Source: Thompson, p. 315

The final phase of integration occurred between 1857 and 1866. In this period the pool members consolidated into a national monopoly. By 1864 only Western Union and the American Telegraph Company remained of the “Six Nations.” The United States Telegraph Company entered the field by consolidating smaller, independent firms in the early 1860s, and operated in the territory of both the American Telegraph Company and Western Union. By 1866 Western Union absorbed its last two competitors and reached its position of market dominance.

Efficiency versus Market Power

Horizontal and system integration had two causes: efficiency and market power. Horizontal integration created economies of scale that could be realized from placing all of the wires between two cities on the same route or all the offices in a city in the same location. This consolidation reduced the cost of maintaining multiple lines. The reduction in competition due to horizontal integration also allowed firms to charge a higher price and earn monopoly profits. The efficiency gain from system integration was better control of messages traveling long distances. With responsibility for the message placed clearly in the hands of one firm, messages were transmitted with more care. System integration also created monopoly power, since to compete with a large incumbent system, a new entrant would also have to create a large infrastructure.

1866-1900: Western Union’s Dominance

The period from 1866 through the turn of the century was the apex of Western Union’s power. Yearly messages sent over its lines increased from 5.8 million in 1867 to 63.2 million in 1900. Over the same period, transmission rates fell from an average of $1.09 to 30 cents per message. Even with these lower prices, roughly 30 to 40 cents of every dollar of revenue were net profit for the company. Western Union faced three threats during this period: increased government regulation, new entrants into the field of telegraphy, and new competition from the telephone. The last two were the most important to the company’s future profitability.

Western Union Fends off Regulation

Western Union was the first nationwide industrial monopoly, with over 90% of the market share and dominance in every state. The states and the federal government responded to this market power. State regulation was largely futile given the interstate character of the industry. On the federal level, bills were introduced in almost every session of Congress calling for either regulation of or government entry into the industry. Western Union’s lobby was able to block almost any legislation. The few regulations that were passed either helped Western Union maintain its control over the market or were never enforced.

Western Union’s Smaller Rivals

Western Union’s first rival was the Atlantic and Pacific Telegraph Company, a conglomeration of new and merged lines created by Jay Gould in 1874. Gould sought to wrest control of Western Union from the Vanderbilts, and he succeeded in 1881 when the two firms merged. A more permanent rival appeared in the 1880s in the form of the Postal Telegraph Company. John Mackay, who had already made a fortune at the Comstock Lode, headed this firm. Mackay did what many of his telegraph predecessors had done in the 1850s: he built a network by buying out existing bankrupt firms and merging them into a system with economies of scale large enough to compete with Western Union. Postal never challenged Western Union’s market dominance, but it did control 10-20% of the market at various times.

The Threat from the Telephone

Western Union’s greatest threat came from a new technology, the telephone. Alexander Graham Bell patented the telephone in 1876, initially referring to it as a “talking telegraph.” Bell offered Western Union the patent for the telephone for $100,000, but the company declined to purchase it. Western Union could easily have gained control of AT&T in the 1890s, but management decided that higher dividends were more important than expansion. The telephone was used in the 1880s only for local calling, but with the development in the 1890s of “long lines,” the telephone offered increased competition to the telegraph. In 1900, local calls still accounted for 97% of the telephone’s business, and it was not until well into the twentieth century that the telephone fully displaced the telegraph.

1900-1988: Increased Competition and Decline

The twentieth century saw the continued rise of the telephone and the decline of the telegraph. Telegraphy continued to have a niche in inexpensive long-distance and international communication, including teletypewriters, Telex, and stock tickers. As shown in Table 1, after 1900 the growth in telegraph traffic slowed, and after 1930 the number of messages sent began to decline.

Table 1: Messages Handled by the Telegraph Network: 1870-1970

Date Messages Handled Date Messages Handled
1870 9,158,000 1930 211,971,000
1880 29,216,000 1940 191,645,000
1890 55,879,000 1945 236,169,000
1900 63,168,000 1950 178,904,000
1910 75,135,000 1960 124,319,000
1920 155,884,000 1970 69,679,000

Source: Historical Statistics.
Notes: Western Union messages 1870-1910; all telegraph companies, 1920-1970.

AT&T Obtains Western Union, Then Gives It Up

In 1909, AT&T gained control of Western Union by purchasing 30% of its stock. In many ways, the companies were heading in opposite directions. AT&T was expanding rapidly, while Western Union was content to reap handsome profits and issue large dividends but not reinvest in itself. Under AT&T’s ownership, Western Union was revitalized, but the two companies separated in 1913, succumbing to pressure from the Department of Justice. In 1911, the Department of Justice successfully used the Sherman Antitrust Act to force a breakup of Standard Oil. This success made the threat of antitrust action against AT&T very credible. Both Postal Telegraph and the independent telephone companies wishing to interconnect with AT&T lobbied for government regulation. In order to forestall any such government action, AT&T issued the “Kingsbury Commitment,” a unilateral commitment to divest itself of Western Union and allow independent telephone firms to interconnect.

Decline of the Telegraph

The telegraph flourished in the 1920s, but the Great Depression hit the industry hard, and it never recovered to its previous position. AT&T introduced the teletypewriter exchange service in 1931. The teletypewriter and the Telex allowed customers to install a machine on their premises that would send and receive messages directly. In 1938, AT&T had 18%, Postal 15% and Western Union 64% of telegraph traffic. In 1945, 236 million domestic messages were sent, generating $182 million in revenues. This was the most messages sent in a year over the telegraph network in the United States. By that time, Western Union had incorporated over 540 telegraph and cable companies into its system. The last important merger was between Western Union and Postal, which occurred in 1945. This final merger was not enough to stop the continuing rise of the telephone or the telegraph’s decline. Already in 1945, AT&T’s revenues and transmission dwarfed those of Western Union. AT&T made $1.9 billion in yearly revenues by transmitting 89.4 million local phone calls and 4.9 million toll calls daily. Table 2 shows the increasing competitiveness of telephone rates with telegraph rates.

Table 2: Telegraph and Telephone Rates from New York City to Chicago: 1850-1970

Date      Telegraph*      Telephone**
1850      $1.55           n/a
1870      1.00            n/a
1890      .40             n/a
1902      n/a             5.45
1919      .60             4.65
1950      .75             1.50
1960      1.45            1.45
1970      2.25            1.05

Source: Historical Statistics.
Notes: * Beginning 1960, for a 15-word message; prior to 1960, for a 10-word message. ** Rates for a station-to-station, daytime, 3-minute call.

The Effects of the Telegraph

The travel time from New York City to Cleveland in 1800 was two weeks, with another four weeks necessary to reach Chicago. By 1830, those travel times had fallen by half, and by 1860 it took only two days to reach Chicago from New York City. By telegraph, however, news could travel between those two cities almost instantaneously. This section examines three areas in which the telegraph affected economic growth: railroads, high-throughput firms, and financial markets.

Telegraphs and Railroads

The telegraph and the railroad were natural partners in commerce. The telegraph needed the right of way that the railroads provided and the railroads needed the telegraph to coordinate the arrival and departure of trains. These synergies were not immediately recognized. Only in 1851 did railways start to use telegraphy. Prior to that, telegraph wires strung along the tracks were seen as a nuisance, occasionally sagging and causing accidents and even fatalities.

The greatest savings from the telegraph came from the continued use of single-tracked railroad lines. Prior to 1851, the U.S. system was single-tracked, and trains ran on a time-interval system. Two types of accidents could occur: trains running in opposite directions could run into one another, as could trains running in the same direction. The potential for accidents required that railroad managers be very careful in dispatching trains. One way to reduce the number of accidents would have been to double-track the system. A second, better way was to use the telegraph.

Double-tracking was a good alternative, but not a perfect one. Double-tracked lines would eliminate head-on collisions, but not same-direction ones; preventing those would still require a timing system, i.e., a set interval between departing trains. Accidents were still possible under such a system. By using the telegraph, station managers knew exactly what trains were on the tracks under their supervision. Double-tracking the U.S. rail system in 1893 would have cost an estimated $957 million, while Western Union’s book capitalization in that year was only $123 million, making the telegraph seem like a bargain by comparison. Of course, the railroads could have used a system like Chappe’s visual telegraph to coordinate traffic, but such a system would have been less reliable and would not have been able to handle the same volume of traffic.
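
A rough comparison of the two figures just cited (the article’s own numbers) makes the point explicit:

\[
\frac{\$957 \text{ million (estimated cost of double-tracking)}}{\$123 \text{ million (Western Union's book capital)}} \approx 7.8,
\]

so the book capital of the entire telegraph system was roughly one-eighth of what double-tracking the railroads was expected to cost.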

Telegraph and Perishable Products Industries

Other industries with high inventory turnover also benefited from the telegraph. Of particular importance were industries in which the product was perishable, including meatpacking and the distribution of fruits and vegetables. The growth of both of these industries was facilitated by the introduction of the refrigerated car in 1874. The telegraph was required for the exact control of shipments. For instance, refrigeration and the telegraph allowed for the slaughter and disassembly of livestock in the giant stockyards of Chicago, Kansas City, St. Louis and Omaha. Beef could then be shipped east at roughly half the cost of shipping the live cattle. The centralization of the stockyards also created tremendous amounts of by-products that could be processed into glue, tallow, dye, fertilizer, feed, brushes, false teeth, gelatin, oleomargarine, and many other useful products.

Telegraph and Financial Markets

The telegraph undoubtedly had a major impact on the structure of financial markets in the United States. New York became the financial center of the country, setting prices for a variety of commodities and financial instruments. Among these were beef, corn, wheat, stocks and bonds. As the telegraph spread, so too did the centralization of prices. For instance, in 1846, wheat and corn prices in Buffalo lagged four days behind those in New York City. In 1848, the two markets were linked telegraphically and prices were set simultaneously.

The centralization of stock prices helped make New York the financial capital of the United States. Over the course of the nineteenth century, hundreds of exchanges appeared and then disappeared across the country. Few of them lasted, with only those in New York, Philadelphia, Boston, Chicago and San Francisco achieving any permanence. By 1910, 90 percent of all bond trades and two-thirds of all stock trades occurred on the New York Stock Exchange.

Centralization of the market created much more liquidity for stockholders. As the number of potential traders increased, so too did the ability to find a buyer or seller of a financial instrument. This increase in liquidity may have led to an increase in the total amount invested in the market, therefore leading to higher levels of investment and economic growth. Centralization may also have led to the development of certain financial institutions that could not have been developed otherwise. Although difficult to quantify, these aspects of centralization certainly had a positive effect on economic growth.

In some respects, we may tend to overestimate the telegraph’s influence on the economy. The rapid distribution of information may have had a collective action problem associated with it. If no one else in Buffalo has a piece of information, such as the change in the price of wheat in New York City, then there is a large private incentive to discover that piece of information quickly. But once everyone has the information, no one is made better off. A great deal of effort may have been spent on an endeavor that, from society’s perspective, did not increase overall efficiency. The centralization in New York also increased the gains from other wealth-neutral or wealth-reducing activities, such as speculation and market manipulation. Higher volumes of trading increased the payoff from the successful manipulation of a market, yet did not increase society’s wealth.

Conclusion

The telegraph accelerated the speed of business transactions during the late nineteenth century and contributed to the industrialization of the United States. Like most industries, it eventually faced new competition that proved to be its downfall. The telephone was easier and faster to use, and the telegraph ultimately lost its cost advantages. In 1988, Western Union divested itself of its telegraph infrastructure and focused on financial services, such as money orders. A Western Union telegram is still available, currently costing $9.95 for 250 words.

Telegraph Timeline

1837 Cooke and Wheatstone patent telegraph in England.
1838 Morse’s Electro-Magnetic Telegraph patent approved.
1843 First message sent between Washington and Baltimore.
1846 First commercial telegraph line completed. The Magnetic Telegraph Company’s lines ran from New York to Washington.
House’s Printing Telegraph patent approved.
1848 Associated Press formed to pool telegraph traffic.
1849 Bain’s Electro-Chemical patent approved.
1851 Hiram Sibley and associates incorporate New York and Mississippi Valley Printing Telegraph Company. Later became Western Union.
1851 Telegraph first used to coordinate train departures.
1857 Treaty of Six Nations is signed, creating a national cartel.
1858 First transatlantic cable is laid from Newfoundland to Valentia, Ireland. Fails after 23 days, having been used to send a total of 4,359 words. Total cost of laying the line was $1.2 million.
1861 First Transcontinental telegraph completed.
1866 First successful transatlantic telegraph cable laid.
Western Union merges with major remaining rivals.
1867 Stock ticker service inaugurated.
1870 Western Union introduces the money order service.
1876 Alexander Graham Bell patents the telephone.
1909 AT&T gains control of Western Union. Divests itself of Western Union in 1913.
1924 AT&T offers Teletype system.
1926 Inauguration of the direct stock ticker circuit from New York to San Francisco.
1930 High-speed tickers can print 500 words per minute.
1945 Western Union and Postal Telegraph Company merge.
1962 Western Union offers Telex for international teleprinting.
1974 Western Union places Westar satellite in operation.
1988 Western Union Telegraph Company reorganized as Western Union Corporation. The telecommunications assets were divested, and Western Union focused on money transfers and loan services.

References

Blondheim, Menahem. News over the Wires. Cambridge: Harvard University Press, 1994.

Brock, Gerald. The Telecommunications Industry. Cambridge: Harvard University Press, 1981.

DuBoff, Richard. “Business Demand and the Development of the Telegraph in the United States, 1844-1860.” Business History Review 54 (1980): 461-477.

Field, Alexander. “The Telegraphic Transmission of Financial Asset Prices and Orders to Trade: Implications for Economic Growth, Trading Volume, and Securities Market Regulation.” Research in Economic History 18 (1998).

Field, Alexander. “French Optical Telegraphy, 1793-1855: Hardware, Software, Administration.” Technology and Culture 35 (1994): 315-47.

Field, Alexander. “The Magnetic Telegraph, Price and Quantity Data, and the New Management of Capital.” Journal of Economic History 52 (1992): 401-13.

Gabler, Edwin. The American Telegrapher: A Social History 1860-1900. New Brunswick: Rutgers University Press, 1988.

Goldin, H. H. “Governmental Policy and the Domestic Telegraph Industry.” Journal of Economic History 7 (1947): 53-68.

Israel, Paul. From Machine Shop to Industrial Laboratory. Baltimore: Johns Hopkins, 1992.

Lefferts, Marshall. “The Electric Telegraph: its Influence and Geographical Distribution.” American Geographical and Statistical Society Bulletin, II (1857).

Nonnenmacher, Tomas. “State Promotion and Regulation of the Telegraph Industry, 1845-1860.” Journal of Economic History 61 (2001).

Oslin, George. The Story of Telecommunications. Macon: Mercer University Press, 1992.

Reid, James. The Telegraph in America. New York: Polhemus, 1886.

Thompson, Robert. Wiring a Continent. Princeton: Princeton University Press, 1947.

U.S. Bureau of the Census. Report of the Superintendent of the Census for December 1, 1852. Washington: Robert Armstrong, 1853.

U.S. Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970: Bicentennial Edition. Washington: GPO, 1976.

Yates, JoAnne. “The Telegraph’s Effect on Nineteenth Century Markets and Firms.” Business and Economic History 15 (1986):149-63.

Citation: Nonnenmacher, Tomas. “History of the U.S. Telegraph Industry”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-the-u-s-telegraph-industry/

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University

Overview

The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and margin buying, all of which fueled the purchase of public utility stocks and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered, using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event, since this sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24 John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And the Economist when stock prices reached their low for the year repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent in four years, a compound annual growth of 21.8%. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous and the stock market with its rising prices reflected this prosperity as well as the expectation that the prosperity would continue.
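
As a quick check of the compounding arithmetic behind the figures just cited:

\[
(1 + 1.20)^{1/4} = (2.20)^{0.25} \approx 1.218,
\]

which corresponds to the stated compound annual growth rate of about 21.8%.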

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading up to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Also, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way toward explaining Fisher’s optimism regarding the level of stock prices. What Fisher saw was that manufacturing efficiency (output per worker) was increasing rapidly, as were manufacturing output and the use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S & P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87.1. The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During this period commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which was still higher than in any previous month or year except September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121, also an all-time record. In October the loadings dropped to 118, a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all the periods studied were prior to 1961) with an annual productivity change measure of 3.7%. This was outstanding economic performance, performance that normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928, the number was only 955 and in 1927, it was 755. In September 1929 dividend increases were announced by 193 firms compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated that the earnings statements of surveyed firms for the first quarter showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929 compared to 1928 was 24.4%. In September, the results were expanded to 916 firms with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than for 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1: Dow Jones Industrials Index, Average of Lows and Highs for the Year
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 Wigmore, 1985, pp. 637-639.

Using the information in Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value over the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices measured at different times would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.
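
Using the Table 1 averages for 1922 (91.0) and 1929 (290.0), the arithmetic behind these figures is:

\[
\frac{290.0}{91.0} \approx 3.19 \;\; (\text{a rise of } 218.7\%), \qquad (3.19)^{1/7} \approx 1.18 \;\; (\text{about } 18\% \text{ per year}).
\]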

If we take the 386 high of September 1929 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, if we had held stock in September 1929, would not have sold early in October. In fact, if I had had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but there will be one major exception. In September 1929, the market value of one segment of the market, the public utility sector, should be based on existing fundamentals, and fundamentals seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post exclaimed in a page 1 headline, “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer, that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique, relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported Snowden’s remark that there was “a perfect orgy of speculation.” Also on October 4, the New York Times made another editorial reference to Snowden’s characterization of American speculation as an orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business conditions
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24) “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification, thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headline (October 24, p. 1) read “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” Out of a total market value of $87 billion, the market declined $4 billion, a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 seem like a mere “Dip.”

The Times lamented on October 24 (p. 38), “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington; the rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October or 18% of the beginning of the month value. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the climate of concern about speculation, which may have made it possible for relatively specific pieces of news to trigger a general market decline. Second, there are the investment trusts, utility holding companies, and margin buying that seem to have left one sector highly over-levered and overvalued. Third, there are the public utility stocks themselves, which appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and that the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 states that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel, described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above) and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazine and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm went bankrupt, because the brokers managed their finances in a conservative manner. At the end of October, margin requirements were lowered to 25%.

Brokers' loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 brokers' loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further, to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, "Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals." The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years, seasoned closed-end mutual funds have typically sold at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund's components (the securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e., higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this "provides clinching evidence of wide-scale stock-market irrationality during the 1920s." However, Malkiel also notes (p. 442) that "as of the mid-1990s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned." Warren Buffett is the guiding force behind Berkshire Hathaway's great success as an investor. If we conclude that rational investors will currently pay a premium for Warren Buffett's expertise, then we should reject the conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.

There were $1 billion of investment trusts sold to investors in the first eight months of 1929, compared to $400 million in all of 1928. The Economist (October 12, 1929, p. 665) noted the importance of this surge: "Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing." In September alone $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from The Economist and the Financial Times) are not exactly comparable, both indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolios of a couple of dozen of the largest investment trusts as of September-October 1929, this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal information for establishing whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For the normal financial security, this is impossible since intrinsic values are not objectively well defined. There are two exceptions. DeLong and Shleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market, and the market values of the securities in the funds' portfolios are a very reasonable estimate of intrinsic value. DeLong and Shleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682) "portfolios were rarely published and net asset values rarely calculated." It was only after the crash that investment trusts routinely began to reveal their net asset values. In the third quarter of 1929 (p. 682), "three types of event seemed to trigger a closed-end fund's publication of its portfolio." The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) the start-up of a new closed-end fund (whose stock price reflects initial selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, so including any that were introduces a bias). After 1929, some trusts revealed their 1929 net asset values. Thus, DeLong and Shleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked information regarding portfolio composition, we would have to place investment trusts in a unique investment category in which investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is itself significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Shleifer found the premiums of price over value to be large; the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because the funds were not selected randomly.

DeLong and Shleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), and 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premia are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than investors not plugged into Wall Street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.

The Public Utility Sector

Regulated public utilities are the second case in which intrinsic values are relatively well defined. The general rule applied by regulatory authorities is to allow utilities to earn a "fair return" on an allowed rate base. The fair return is defined to be equal to a utility's weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed equity rate base of $X and is allowed to earn a return of r (that is, rX in dollars), then after one year the firm's equity will be worth X + rX, or (1 + r)X, which has a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility's stock is approximately equal to the rate base.
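The fair-return arithmetic can be verified in a few lines. The figures below are purely hypothetical and only restate the point made above: when the allowed return equals the return investors require, the present value of the equity equals the rate base, so price should track book value.

    # Hypothetical check of the fair-return argument (all numbers are illustrative).
    X = 100.0   # allowed equity rate base per share
    r = 0.08    # allowed return, assumed equal to the market's required return

    value_in_one_year = X * (1 + r)              # rate base plus one year's allowed earnings
    present_value = value_in_one_year / (1 + r)  # discount at the required return

    print(present_value)  # 100.0 -> stock price should approximate book value per share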

There can be time periods when the utility earns more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may differ from the allowed return because the allowed return was set inaccurately or because capital market conditions changed. Thus, the stock price may differ from the book value, but one would not expect the stock price to differ greatly from the book value per share for very long. The stock price should tend to revert to the book value for a public utility supplying an essential service where there is no effective competition and the rate commission is effectively allowing a fair return to be earned.

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Firm 1929 Price-Earnings Ratio (at High Price for Year) Market Price/Book Value
Commonwealth Edison 35 3.31
Consolidated Gas of New York 39 3.34
Detroit Edison 35 3.06
Pacific Gas & Electric 28 3.30
Public Service of New Jersey 35 3.14

Sooner or later this price bubble had to break unless the regulatory authorities were to decide to allow the utilities to earn more than a fair return, or an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929 applicable to the Edison Electric Illuminating Company of Boston made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32% and utilities by 55% — thus utilities dropped the furthest from the highs. A comparison of the beginning of the year prices and the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (quotations from the New York Times, January 1, 1930).

1929
Firm High Price Low Price Low Price Divided by High Price
American Power & Light 175 3/8 64 1/4 .37
American Superpower 71 1/8 15 .21
Brooklyn Gas 248 1/2 99 .44
Buffalo, Niagara & Eastern Power 128 61 1/8 .48
Cities Service 68 1/8 20 .29
Consolidated Gas Co. of N.Y. 183 1/4 80 1/8 .44
Electric Bond and Share 189 50 .26
Long Island Lighting 91 40 .44
Niagara Hudson Power 30 3/4 11 1/4 .37
Transamerica 67 3/8 20 1/4 .30

Picking on one segment of the market as the cause of a general break in the market is not obviously correct. But the combination of an overpriced utility segment, investment trusts, and a portion of the market that had purchased on margin appears to be a viable explanation. In addition, as of September 1, 1929, the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, utilities were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1), in explaining the October 16 market declines (an Associated Press release), reported, "Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision."

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The headline read "Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity." Boston Edison lost 15 points for the day even though the decision was released after the Friday close. The high for the year was $440, and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison; on October 16, the Department announced it would investigate the level of rates being charged by Edison; and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), there was a discussion in the Times of the Massachusetts decision in the column "Topic in Wall Street." It "excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States."

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14), but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor that can be identified as leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was its refusal to permit the Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that a stock split merely reduces the price per share in proportion and leaves the total value unchanged; thus the denial of the split was not in itself economically significant, and the split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to "reduce the selling price to the consumer." Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) "The worst pressure was against Public Utility shares" and the headline read "Electric Issue Hard Hit."

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York's governor, Franklin D. Roosevelt, appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, "this inquiry is likely to have far-reaching effects and may lead to similar action in other States." Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a "vicious system" (October 19, p. 21) that ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee, wanted more control over utility holding companies, especially management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting graft accusations with statements that his administration would fight aggressively against rate increases, as if to prove that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against "speculative and uninformed buying." The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a drop of about 34 points, or 11%) and closed at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and, even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, three potentially important multipliers meant that any change in a public utility's underlying value produced a magnified change in market value and in investors' wealth.

Consider the following hypothetical values for a public utility:

Book value per share for a utility: $50
Market price per share: $162.50 2
Market price of investment trust holding the stock (assuming a 100% premium over market value): $325.00

If the utility's $112.50 premium of market price over book value were eliminated and the trust sold with no premium, the market price of the investment trust would fall to $50. The combined loss in market value of the investment trust's stock and the utility's stock would be $387.50: the $112.50 loss in the underlying stock plus the $275 reduction in the investment trust's stock value. The public utility holding companies were, in fact, even more vulnerable to a stock price change, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43). The $387.50 loss assumes investments in both the firm's stock and the investment trust.

For simplicity, this discussion has assumed that the trust held all of the underlying company's stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, the discussion has also assumed that no debt or margin was used to finance the investment. Assume instead that the individual investor put up only $162.50 of his own money and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock fell from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100 and the investor would have lost 100% of the investment, since the investor owes $162.50. The vulnerability of the margin investor buying trust stock that is itself invested in a utility is obvious.
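A minimal sketch of the arithmetic in the last two paragraphs, using the same hypothetical figures (book value $50, market price $162.50, a 100% trust premium, and 50% margin):

    # Hypothetical figures from the example above; none of these are data.
    book_value    = 50.00     # utility book value per share
    market_price  = 162.50    # utility market price (3.25 x book value)
    trust_premium = 1.00      # trust sells at a 100% premium over the stock it holds

    trust_price = market_price * (1 + trust_premium)       # 325.00

    # Case 1: the premium over book value disappears and the trust sells with no premium.
    loss_in_stock = market_price - book_value               # 112.50
    loss_in_trust = trust_price - book_value                 # 275.00
    combined_loss = loss_in_stock + loss_in_trust            # 387.50

    # Case 2: the investor bought the trust on 50% margin and the trust premium persists.
    loan = trust_price / 2                                    # 162.50 borrowed
    new_trust_price = book_value * (1 + trust_premium)        # 100.00
    investor_equity = new_trust_price - loan                  # -62.50: more than a 100% loss

    print(combined_loss, investor_equity)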

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies' stock, and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies. The average ratio of the low 1929 price to the high price was 33%. These holding company stocks were even more volatile than those of the operating utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that asset is financed with 40% stock ($400,000). A utility holding company owns the utility stock and is also financed with 40% stock ($160,000). A second utility holding company owns the first and it is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock using 50% margin and investing $12,800 in the stock. Thus, the $1,000,000 utility asset is financed with $12,800 of equity capital.

When the large amount of leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust, the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset is again financed with $600,000 of debt and $400,000 of common stock, but the common stock now has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common stock. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is thus supporting $1,761,600 of debt, and the investor's $38,400 of equity is very much in jeopardy.
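The pyramiding in the last two paragraphs can be traced step by step. The sketch below simply replays the article's hypothetical 60% debt / 40% equity chain and its inflated-value variant; the numbers are illustrations, not data.

    # Hypothetical leverage pyramid: each tier finances its holding with 60% debt
    # and 40% common stock; the investor buys the trust's shares on 50% margin.
    asset_cost = 1_000_000

    utility_stock   = asset_cost * 0.40       # 400,000
    holding1_stock  = utility_stock * 0.40    # 160,000
    holding2_stock  = holding1_stock * 0.40   #  64,000
    trust_stock     = holding2_stock * 0.40   #  25,600
    investor_equity = trust_stock * 0.50      #  12,800 of equity behind a $1,000,000 asset

    # Inflated-value variant: the operating company's common stock trades at $1,200,000.
    debt_utility  = asset_cost * 0.60                                  # 600,000
    stock_utility = 1_200_000
    debt_h1, stock_h1 = stock_utility * 0.60, stock_utility * 0.40     # 720,000 / 480,000
    debt_h2, stock_h2 = stock_h1 * 0.60, stock_h1 * 0.40               # 288,000 / 192,000
    debt_tr, stock_tr = stock_h2 * 0.60, stock_h2 * 0.40               # 115,200 /  76,800
    margin_debt = stock_tr * 0.50                                      #  38,400

    total_debt = debt_utility + debt_h1 + debt_h2 + debt_tr + margin_debt
    print(investor_equity, total_debt)   # 12,800.0 and 1,761,600.0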

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that fear of speculation helped push the stock market to the brink of collapse. It is possible that Hoover's aggressive campaign against speculation, combined with the overpriced public utility sector hit by the Massachusetts Public Utility Commission's decision and statements and the vulnerability of margin investors, triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden's reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities would not be able to justify their market prices in the future, the decline in utility stock prices further weakened margin positions and led to general selling. At some stage, the selling panic started and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, some investors lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of presumably undramatic events may establish a setting for a wide price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.

References

Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. "The Reasons Stock Crashed in 1929." Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. "Bad Market Days." World Economics (2001): 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. "The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds." Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White, “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. "The Equity Premium: Stock and Bond Returns since 1802." Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

1 1923-25 average = 100.

2 Based on a price-to-book-value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-1929-stock-market-crash/

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers' interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year's pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as "room and pillar" mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890 -1904
(Fatality rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American carriers were poorly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889 – 1901
(Fatality Rates per Thousand Workers per Year)

1889 1895 1901
British railroad workers, all causes 1.14 0.95 0.89
British trainmen (a), all causes 4.26 3.22 2.21
Coupling 0.94 0.83 0.74
American railroad workers, all causes 2.67 2.31 2.50
American trainmen, all causes 8.52 6.45 7.35
Coupling 1.73 (c) 1.20 0.78
Braking (b) 3.25 (c) 2.44 2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease of mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also date from this period, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Federal Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893 and after 1900 they campaigned for more of the same. In response, Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers' liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen's compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany. He said he was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers' liability initiated the modern concern with work safety and started the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and the National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs along with the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910 and at some large companies such as DuPont and whole industries such as steel making (see Table 3) safety also improved dramatically. Largely independent changes in technology and labor markets also contributed to safety as well. The decline in labor turnover meant fewer new employees who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal mining, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: the Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. "Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America." Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers results in [10/(450 x 2000)] x 1,000,000 = 11.1 injuries per million hours worked.
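The footnote's rate arithmetic, restated as a short calculation with the same hypothetical ten injuries among 450 workers:

    # Hypothetical rate arithmetic from footnote 1.
    injuries, workers, hours_per_year = 10, 450, 2000

    rate_per_worker        = injuries / workers                                  # 0.0222
    rate_per_thousand      = rate_per_worker * 1_000                             # 22.2
    rate_per_hundred_thou  = rate_per_worker * 100_000                           # 2,222
    rate_per_million_hours = injuries / (workers * hours_per_year) * 1_000_000   # 11.1

    print(round(rate_per_thousand, 1), round(rate_per_million_hours, 1))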

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System; Rosenberg, Technology; Aldrich, Safety First.

4 On the workings of the employers' liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First chapter 1.

7 Aldrich, Safety First chapter 3

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, "Fraternalism, Paternalism," and Aldrich, "Train Wrecks to Typhoid Fever."

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, "From Common Law."

10 On the evolution of freight car technology see White, American Railroad Freight Car, Usselman “Air Brakes for Freight trains,” and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal-Mining Safety, Aldrich, “‘The Needless Peril.”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, "From Exit to Voice."

16 Aldrich, "'The Needless Peril,'" and Humphrey, "Historical Summary."

17 Derickson, "Participative Regulation" and Fairris, "Institutional Change," also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety (Cambridge: MIT Press, 1979), and Viscusi, Risk by Choice.

Citation: Aldrich, Mark. “History of Workplace Safety in the United States, 1880-1970″. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-workplace-safety-in-the-united-states-1880-1970/

Military Spending Patterns in History

Jari Eloranta, Appalachian State University

Introduction

Determining adequate levels of military spending and sustaining the burden of conflicts have been among the key fiscal problems in history. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was frequently the adequate maintenance of supply routes for the armed forces. On the other hand, these societies were by and large subsistence societies, so they could not extract massive resources for such ventures, at least until the arrival of the Roman and Byzantine Empires. The emerging nation states of the early modern period were much better equipped to fight wars. On the one hand, the frequent wars, new gunpowder technologies, and the commercialization of warfare forced them to consolidate resources for the needs of warfare. On the other hand, the rulers had to, slowly but surely, give up some of their sovereignty to be able to secure the required credit both domestically and abroad. The Dutch and the British were masters at this, with the latter amassing an empire that spanned the globe on the eve of the First World War.

The early modern expansion of Western European states started to challenge other regimes all over the world, made possible by their military and naval supremacy as well as later on by their industrial prowess. The age of total war in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Comparatively, even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest amount of their GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist Bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered the aggregate military spending in the world. Newer security challenges such as terrorism and various interstate rivalries have again pushed the world towards growing overall military spending.

This article will first elaborate on some of the research trends in studying military spending and the multitude of theories attempting to explain the importance of warfare and military finance in history. This survey will be followed by a chronological sweep, starting with the military spending of the ancient empires and ending with a discussion of the current behavior of states in the post-Cold War international system. By necessity, this chronological review will be selective at best, given the enormity of the time period in question and the complexity of the topic at hand.

Theoretical Approaches

Military spending is a key phenomenon for understanding various aspects of economic history: the cost, funding, and burden of conflicts; the creation of nation states; and, in general, the increased role of government in everyone's lives, especially since the nineteenth century. Nonetheless, certain characteristics can be distinguished in the efforts to study this complex topic across different disciplines (mainly history, economics, and political science). Historians, especially diplomatic and military historians, have been keen on studying the origins of the two World Wars and perhaps certain other massive conflicts. Nonetheless, many of the historical studies on war and societies have analyzed developments at an elusive macro-level, often without a great deal of elaboration on the quantitative evidence behind the assumptions on the effects of military spending. For example, Paul Kennedy argued in his famous The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (1989) that military spending by hegemonic states eventually becomes excessive and a burden on their economies, finally leading to economic ruin. This argument has been criticized by many economists and historians, since it seems to lack the proper quantitative evidence to support its claims about the interaction between military spending and economic growth.[2] Quite frequently, as in the classic studies by A.J.P. Taylor and many more recent works, historians tend to be more interested in the impact of foreign policy decision-making and alliances, in addition to resolving the issue of "blame" on the road towards major conflicts[3], rather than in how reliable quantitative evidence can be mustered to support or disprove the key arguments. Economic historians, in turn, have not been particularly interested in the long-term economic impacts of military spending. Usually the interest of economic historians has centered on the economics of global conflicts — of which a good example of recent work combining the theoretical aspects of economics with historical case studies is The Economics of World War II, a compilation edited by Mark Harrison — as well as the immediate short-term economic impacts of wartime mobilization.[4]

The study of defense economics and military spending patterns as such is closely tied to the immense expansion of military budgets and military establishments in the Cold War era. It involves the application of the methods and tools of economics to the issues arising from such a huge expansion. At least three aspects of defense economics set it apart from other fields of economics: 1) the actors (both private and public, for example in contracting); 2) the theoretical challenges introduced by the interaction of different institutional and organizational arrangements, both in budgeting and in allocation procedures; 3) the nature of military spending as a tool for destruction as well as a provider of security.[5] One of the shortcomings in the study of defense economics has been, at least so far, the lack of interest in periods before the Second World War.[6] For example, how much has the overall military burden (military expenditures as a percentage of GDP) of nation states changed over the last couple of centuries? Or, how big a financial burden did the Thirty Years War (1618-1648) impose on the participating Great Powers?

A “typical” defense economist (see especially Sandler and Hartley (1995)) would, based on public good theories, model and attempt to explain the military spending behavior of states (essentially the demand for military spending) with the following base equation:

ME_it = f(PRICE_it, INCOME_it, SPILLINS_it, THREATS_it, STRATEGY_it)     (1)

In Equation 1, ME represents military expenditures by state i in year t, PRICE the price of military goods (affected by technological changes as well), INCOME most commonly the real GDP of the state in question, SPILLINS the impact of friendly states’ military spending (for example in an alliance), THREATS the impact of hostile states’ or alliances’ military expenditures, and STRATEGY the constraints imposed by changes in the overall strategic parameters of a nation. Most commonly, a higher price for military goods lowers military spending; higher income tends to increase ME (like during the industrial revolutions); alliances often lower ME due to the free riding tendencies of most states; threats usually increase military spending (and sometimes spur on arms races); and changes in the overall defensive strategy of a nation can affect ME in either direction, depending on the strategic framework implemented. While this model may be suitable for the study of, for example, the Cold War period, it fails to capture many other important explanatory factors, such as the influence of various organizations and interest groups in the budgetary processes as well as the impact of elections and policy-makers in general. For example, interest groups can get policy-makers to ignore price increases (on, for instance, domestic military goods), and election years usually alter (or focus) the behavior of elected officials.
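To make the model concrete, the following minimal sketch fits a log-linear version of Equation 1 by ordinary least squares on synthetic annual data for a single hypothetical state. It is not drawn from Sandler and Hartley or any of the sources cited here: the variable names follow the text, but the functional form, coefficients, and data are illustrative assumptions only.

```python
# A minimal sketch of estimating a log-linear version of Equation 1 with OLS on
# *hypothetical* data for one state. All numbers and the functional form are
# illustrative assumptions, not estimates from the literature cited in the text.
import numpy as np

rng = np.random.default_rng(0)
T = 40  # hypothetical number of annual observations

# Hypothetical explanatory series (in logs, except the strategy dummy)
log_price    = rng.normal(0.0, 0.1, T)              # relative price of military goods
log_income   = np.linspace(10.0, 10.8, T)           # real GDP, trending upward
log_spillins = rng.normal(8.0, 0.2, T)              # allied military spending
log_threats  = rng.normal(8.5, 0.3, T)              # rivals' military spending
strategy     = (np.arange(T) >= 20).astype(float)   # dummy for an assumed doctrine shift

# Generate a hypothetical dependent variable consistent with the signs discussed in the text:
# higher prices lower ME, income and threats raise it, spill-ins lower it (free riding).
log_me = (1.0 - 0.5 * log_price + 0.8 * log_income
          - 0.3 * log_spillins + 0.4 * log_threats
          + 0.2 * strategy + rng.normal(0.0, 0.05, T))

# OLS: log ME_t = b0 + b1*PRICE + b2*INCOME + b3*SPILLINS + b4*THREATS + b5*STRATEGY
X = np.column_stack([np.ones(T), log_price, log_income, log_spillins, log_threats, strategy])
coeffs, *_ = np.linalg.lstsq(X, log_me, rcond=None)
for name, b in zip(["const", "PRICE", "INCOME", "SPILLINS", "THREATS", "STRATEGY"], coeffs):
    print(f"{name:9s} {b:+.3f}")
```

With these assumed data the recovered signs match the stylized expectations listed above; with real historical series the interest-group and electoral effects mentioned in the text would of course require additional variables.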

Within the peace sciences, in turn, a broader school of thought that overlaps with defense economics, research has focused on finding the causal factors behind the most destructive conflicts. One of the most significant of such interdisciplinary efforts has been the Correlates of War (COW) project, which started in the spring of 1963. This project and the researchers loosely associated with it, not to mention its importance in producing comparative statistics, have had a major impact on the study of conflicts.[7] As Daniel S. Geller and J. David Singer have noted, the number of territorial states in the global system has ranged from fewer than 30 after the Napoleonic Wars to nearly 200 at the end of the twentieth century, and it is essential to test the various indicators collected by peace scientists against the historical record until theoretical premises can be confirmed or rejected.[8] In fact, a typical feature of most studies of this type is that they focus on finding the sets of variables that might predict major wars and other conflicts, in a way similar to the historians’ origins-of-wars approach, whereas studies investigating the military spending behavior of monads (single states), dyads (pairs of states), or systems in particular are quite rare. Moreover, even though some cycle theorists and conflict scientists have been interested in the formation of modern nation states and the respective system of states since 1648, they have not expressed any real interest in pre-modern societies and warfare.[9]

Nevertheless, these contributions have had a lot to offer to the study of the long-run dynamics of military spending, state formation, and warfare. According to Charles Tilly, there are four broad approaches to the study of the relationships between war and power: 1) the statist; 2) the geopolitical; 3) the world system; and 4) the mode of production approach. The statist approach presents war, international relations, and state formation chiefly as a consequence of events within particular states. The geopolitical analysis centers on the argument that state formation responds strongly to the current system of relations among states. The world system approach, à la Wallerstein, is rooted mainly in the idea that the different paths of state formation are influenced by the division of resources in the world system. In the mode of production framework, the way that production is organized determines the outcome of state formation. None of these approaches, as Tilly has pointed out, is adequate in its purest form for explaining state formation, international power relations, and economic growth as a whole.[10] Tilly himself maintains that coercion (a monopoly of violence by rulers and the ability to wield coercion externally as well) and capital (the means of financing warfare) were the key elements in the European ascendancy to world domination in the early modern era. Warfare, state formation, and technological supremacy were all interrelated fundamentals of the same process.[11]

How can these theories of state behavior at the system level be linked to the analysis of military spending? According to George Modelski and William R. Thompson, proponents of Kondratieff waves and long cycles as explanatory forces in the development of world leadership patterns, the key to a state’s ascendancy to prominence in such models is naval power; i.e., a state’s ability to vie for world political leadership, colonization, and domination in trade.[12] One of the less explored aspects in most studies of hegemonic patterns is the military expenditure component in the competition between states for military and economic leadership in the system. It is often argued, for example, that uneven economic growth causes nations to compete for economic and military prowess. The leader nation(s) thus has to dedicate increasing resources to armaments in order to maintain its position, while the other states, the so-called followers, can benefit from greater investments in other areas of economic activity. The follower states therefore act as free-riders in the international system stabilized by the hegemon. A built-in assumption in this hypothesized development pattern is that military spending eventually becomes harmful to economic development, a notion that has often been challenged on the basis of empirical studies.[13]

Overall, the assertion arising from such a framework is that economic development and military spending are closely interdependent, with military spending acting as the driving force behind economic cycles. Moreover, based on this development pattern, it has been suggested that a country’s poor economic performance is linked to the “wasted” economic resources represented by military expenditures. However, as recent studies have shown, economic development is often more significant in explaining military spending than vice versa. The development of the U.S. economy since the Second World War certainly does not display the type of hegemonic decline predicted by Kennedy.[14] The aforementioned development pattern can be paraphrased as the so-called war chest hypothesis. As some of the hegemonic theorists reviewed above suggest, economic prosperity might be a necessary prerequisite for war and expansion. Thus, as Brian M. Pollins and Randall L. Schweller have indicated, economic growth would induce rising government expenditures, which in turn would enable higher military spending — therefore military expenditures would be “caused” by economic growth at a certain time lag.[15] For military spending to hinder economic performance, it would have to surpass most other areas of economic activity, as is often the case during wartime.

There have been relatively few credible attempts to model the military (or budgetary) spending behavior of states based on their long-run regime characteristics. Here I focus on three in particular: 1) the Webber-Wildavsky model of budgeting; 2) the Richard Bonney model of fiscal systems; and 3) the Niall Ferguson model of the interaction between public debts and forms of government. Carolyn Webber and Aaron Wildavsky maintain essentially that each political culture generates its characteristic budgetary objectives: productivity in market regimes, redistribution in sects (specific groups dissenting from an established authority), and more complex procedures in hierarchical regimes.[16] Thus, according to them, the budgetary consequences arising from the chosen regime can be divided into four categories: despotism, state capitalism, American individualism, and social democracy. Each of these in turn has implications for the respective regime’s revenue and spending needs.

This model, however, is essentially static; it does not provide clues as to why nations’ behavior may change over time. Richard Bonney has addressed this problem in his writings, mainly on the early modern states.[17] He has emphasized that states’ revenue and tax collection systems, the backbone of any militarily successful nation state, have evolved over time. For example, in most European states the government became the arbiter of disputes and the defender of certain basic rights in society by the early modern period. During the Middle Ages, European fiscal systems were relatively backward and autarchic, with mostly predatory rulers (or roving bandits, as Mancur Olson has called them).[18] In Bonney’s model this would be the stage of the so-called tribute state. Next in the evolution came, respectively, the domain state (with stationary bandits providing some public goods), the tax state (with more reliance on credit and revenue collection), and finally the fiscal state (embodying more complex fiscal and political structures). A superpower like Great Britain in the nineteenth century, in fact, had to be a fiscal state to be able to dominate the world, given all the burdens that went with an empire.[19]

While both of the models mentioned above have provided important clues as to how and why nations have prepared fiscally for wars, the most complete account of this process (along with Charles Tilly’s framework covered earlier) has been provided by Niall Ferguson.[20] He has maintained that wars have shaped all the most relevant institutions of modern economic life: tax-collecting bureaucracies, central banks, bond markets, and stock exchanges. Moreover, he argues that the invention of public debt instruments has gone hand-in-hand with more democratic forms of government and military supremacy – hence the so-called Dutch or British model. These types of regimes have also been the most efficient economically, which has in turn reinforced the success of this fiscal regime model. In fact, military expenditures may have been the principal cause of fiscal innovation for most of history. Ferguson’s model highlights the importance, for a state’s survival among its challengers, of adopting the right types of institutions and technology, along with a sufficient helping of external ambition. All in all, I would summarize the required model, combining elements from the various frameworks, as an evolutionary one, with regimes at different stages having different priorities and different burdens imposed by military spending, depending also on their position in the international system. A successful ascendancy to a leadership position required higher expenditures, a substantial navy, fiscal and political structures conducive to increasing the availability of credit, and recurring participation in international conflicts.

Military Spending and the Early Empires

For most societies since the ancient river valley civilizations, military exertions and the means by which to finance them have been crucial problems of governance. A centralized ability to plan and control spending was lacking in most governments until the nineteenth century. In fact, among the ancient civilizations, financial administration and the government were inseparable. Governments were organized on a hierarchical basis, with the rulers having supreme control over military decisions. Taxes were often paid in kind to support the rulers, which made it more difficult to monitor and utilize the revenues for military campaigns over great distances. For these agricultural economies, victory in war usually yielded lavish tribute to supplement royal wealth and helped to maintain the army and control the population. Thus, the support of large military forces and expeditions, contingent on food and supplies, was the ancient government’s principal expense and problem. Dependence on distant, often external suppliers of food limited the expansion of these empires. Fiscal management in turn was usually cumbersome and costly, and all of the ancient governments were internally unstable and vulnerable to external incursions.[21]

Soldiers, however, often supplemented their supplies by looting enemy territory. The optimal size of an ancient empire was determined by the efficiency of tax collection and allocation, resource extraction, and its transportation system. Moreover, the supply of metal and weaponry, though important, was seldom the only critical variable for the military success of an ancient empire. There were, however, important turning points in this respect, such as the introduction of bronze weaponry, beginning in Mesopotamia about 3500 B.C. The introduction of iron weaponry about 1200 B.C. in the eastern parts of Asia Minor (although the subsequent spread of this technology was fairly slow, gathering momentum only from about 1000 B.C. onwards) and the use of chariot warfare opened a new phase in warfare, due to the superior efficiency and cheapness of iron armaments as well as the hierarchical structures needed to wage war in the chariot era.[22]

The river valley civilizations, nonetheless, paled in comparison with the military might and economy of one of the most efficient military behemoths of all time: the Roman Empire. Military spending was the largest item of public spending throughout Roman history. All Roman governments, like Athens in the time of Pericles, had problems gathering enough revenue. Therefore, for example, in the third century A.D. Roman citizenship was extended to all residents of the empire in order to raise revenue, as only citizens paid taxes. There were also other constraints on spending, such as technological, geographic, and other productivity concerns. Direct taxation was, however, regarded as a dishonor, to be resorted to only in times of crisis. Thus, taxation during most of the empire remained moderate, consisting of extraordinary taxes (paralleling the so-called liturgies of ancient Athens) during such episodes. During the first two centuries of the empire, the Roman army had about 150,000 to 160,000 legionnaires, in addition to 150,000 other troops, and in this period soldiers’ wages began to increase rapidly to ensure the army’s loyalty. In republican and imperial Rome, military wages accounted for more than half of state revenue. The demands of the empire became more and more extensive during the third and fourth centuries A.D., as the internal decline of the empire became more evident and Rome’s external challengers grew stronger. The limited use of direct taxes and widespread tax evasion, for example, could not fulfill the fiscal demands of the crumbling empire. Armed forces were in turn used to maintain internal order. Societal unrest, inflation, and external incursions finally brought the Roman Empire, at least in the West, to an end.[23]

Warfare and the Rise of European Supremacy

During the Middle Ages, following the decentralized era of barbarian invasions, a varied system of European feudalism emerged, in which feudal lords often provided protection to communities in exchange for service or payment. From the Merovingian era onwards, soldiers became more specialized professionals, with expensive horses and equipment. By the Carolingian era, military service had become largely the prerogative of an aristocratic elite. Prior to 1000 A.D., the command system was preeminent in mobilizing human and material resources for large-scale military enterprises, mostly on a contingency basis.[24] The isolated European societies, with the exception of the Byzantine Empire, paled in comparison with the splendor and accomplishments of the empires of China and the Muslim world. In terms of science and inventions, too, the Europeans were no match for these empires until the early modern period. Moreover, it was not until the twelfth century and the Crusades that the feudal kings needed to supplement their ordinary revenues to finance large armies. Internal discontent in the Middle Ages often led to an expansionary drive, as the spoils of war helped calm the elite — the French kings, for example, had to establish firm taxing power in the fourteenth century out of military necessity. The political ambitions of medieval kings, however, still relied on revenue strategies that catered to short-term deficits, which made long-term credit and prolonged military campaigns difficult.[25]

Innovations in the ways of waging war and in technology invented by the Chinese and Islamic societies permeated Europe with a delay, such as the use of pikes in the fourteenth century and the gunpowder revolution of the fifteenth century, which in turn permitted armies to attack and defend larger territories. This also made possible a commercialization of warfare in Europe in the fourteenth and fifteenth centuries, as feudal armies had to give way to professional mercenary forces. Accordingly, medieval states had to increase their taxation levels and improve tax collection to support the growing costs of warfare and the maintenance of larger standing armies. The age of the commercialization of warfare was also accompanied by the rising importance of sea power, as European states began to build their overseas empires (as opposed to, for example, the isolationist turn of Ming China in the fifteenth century). States such as Portugal, the Netherlands, and England, respectively, became the “systemic leaders” due to their extensive fleets and commercial expansion in the period before the Napoleonic Wars. These were also states that were economically cohesive, thanks to internal waterways and small geographic size. The early winners in the fight for world leadership, such as England, benefited greatly from the availability of inexpensive credit, which enabled them to mobilize limited resources effectively to meet military expenses. Their rise was of course preceded by the naval exploration and empire-building of many successful European states, especially Spain, both in Europe and around the globe.[26]

This shift from command to commercialized warfare, and from short-term to more permanent military management systems, can be seen in the English case. In the period 1535-1547, the English defense share (military expenditures as a percentage of central government expenditures) averaged 29.4 percent, with large fluctuations from year to year. In the period 1685-1813, by contrast, the mean English defense share was 74.6 percent, never dropping below 55 percent. The newly emerging nation states began to develop more centralized and productive revenue-expenditure systems, the goal of which was to enhance the state’s power, especially in the absolutist era. This also reflected the growing cost and scale of warfare: during the Thirty Years’ War between 100,000 and 200,000 men fought under arms, whereas some fifty years later 450,000 to 500,000 men fought on both sides in the War of the Spanish Succession. The numbers notwithstanding, the Thirty Years’ War was a conflict directly comparable to the world wars in terms of destruction. Charles Tilly, for example, has estimated that battle deaths exceeded two million. Henry Kamen, in turn, has emphasized the mass-scale destruction and economic dislocation the war caused in the German lands, especially for the civilian population.[27]
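Because the two ratios used throughout this article are easily confused, the following minimal sketch (with purely hypothetical figures, not drawn from the sources cited here) computes both the defense share and the military burden for an imaginary state.

```python
# A minimal sketch with assumed figures: the defense share is military spending as a
# percentage of central government expenditure, while the military burden is military
# spending as a percentage of GDP. None of these numbers come from the cited sources.

military_expenditure = 12.0      # assumed, in millions of a common currency unit
central_govt_expenditure = 16.0  # assumed
gdp = 400.0                      # assumed

defense_share = 100.0 * military_expenditure / central_govt_expenditure
military_burden = 100.0 * military_expenditure / gdp

print(f"Defense share:   {defense_share:.1f} % of central government spending")
print(f"Military burden: {military_burden:.1f} % of GDP")
# A state can thus devote three-quarters of its budget to the military (a high
# defense share) while that same spending amounts to only a few percent of GDP.
```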

With the increasing scale of armed conflicts in the seventeenth century, the participants became more and more dependent on access to long-term credit, because whichever government ran out of money had to surrender first. Even though the causes of Spain’s supposed decline in the seventeenth century are still disputed, it can be said that the lack of royal credit and the poor management of government finances resulted in heavy deficit spending as military exertions followed one after another. The Spanish Crown therefore defaulted repeatedly during the sixteenth and seventeenth centuries, and these defaults on several occasions forced Spain to seek an end to its military activities. Spain nevertheless remained one of the most important Great Powers of the period, and was able to keep its massive empire mostly intact until the nineteenth century.[28]

What about other countries; can they shed further light on the importance of military spending and warfare in early modern economic and political development? A key question for France, for example, was the financing of its military exertions. According to Richard Bonney, the cost of France’s armed forces in its era of “national greatness” was stupendous, with expenditure on the army in the period 1708-1714 averaging 218 million livres, whereas during the Dutch War of 1672-1678 it had averaged only 99 million in nominal terms. This was due both to growth in the size of the army and the navy and to the decline in the purchasing power of the French livre. The overall burden of war, however, remained roughly similar in this period: war expenditures accounted for roughly 57 percent of total expenditure in 1683, and about 52 percent in 1714. Moreover, as for all the main European monarchies, it was the expenditure on war that brought fiscal change to France, especially after the Napoleonic wars. Between 1815 and 1913, French public expenditure increased by 444 percent, accompanied by the consolidation of the emerging fiscal state. This also embodied a change in the structure of the French credit market.[29]

A success story, and in a way a predecessor to the British model, was the Dutch state in this period. As Marjolein ‘t Hart has noted, domestic investors were instrumental in supporting their new-born state, as the state was able to borrow the money it needed from the credit markets, which provided stability in public finances even during crises. This financial regime lasted until the end of the eighteenth century. Here again we can observe the intermarriage of military spending and the availability of credit, essentially the basic logic of the Ferguson model. One of the key features of the Dutch success in the seventeenth century was the ability to pay soldiers relatively promptly. The Dutch case also underlines the primacy of military spending in state budgets and the burden it imposed on early modern states. As we can see in Figure 1, the defense share of the Dutch region of Groningen remained consistently around 80 to 90 percent until the mid-seventeenth century, after which it declined, at least temporarily during periods of peace.[30]

Figure 1

Groningen’s Defense Share (Military Spending as a Percentage of Central Government Expenditures), 1596-1795

Source: L. van der Ent, et al. European State Finance Database. ESFD, 1999 [cited 1.2.2001]. Available from: http://www.le.ac.uk/hi/bon/ESFDB/frameset.html.

In the eighteenth century, with rapid population growth in Europe, armies also grew in size, especially the Russian army. In Western Europe, the mounting intensity of warfare from the Seven Years War (1756-1763) onwards finally culminated in the French Revolution and Napoleon’s conquests and defeat (1792-1815). The new style of warfare brought on by the Revolutionary Wars, with conscription and war of attrition as new elements, can be seen in the growth of army sizes. For example, the French army grew over 3.5 times in size from 1789 to 1793 – up to 650,000 men. Similarly, the British army grew from 57,000 men in 1783 to 255,000 in 1816. The Russian army reached the massive size of 800,000 men in 1816, and Russia kept its armed forces at similar levels throughout the nineteenth century. However, the number of Great Power wars declined (see Table 1), as did their average duration. Yet some of the conflicts of the industrial era became massive and deadly events, drawing most parts of the world into what were essentially European conflicts.

Table 1

Wars Involving the Great Powers

Century | Number of wars | Average duration of wars (years) | Proportion of years war was underway (%)
16th | 34 | 1.6 | 95
17th | 29 | 1.7 | 94
18th | 17 | 1.0 | 78
19th | 20 | 0.4 | 40
20th | 15 | 0.4 | 53

Source: Charles Tilly. Coercion, Capital, and European States, AD 990-1990. Cambridge, Mass: Basil Blackwell, 1990.

The Age of Total War and Industrial Revolutions

With the new kind of mobilization, which became more or less a permanent state of affairs in the nineteenth century, centralized governments required new methods of finance. The nineteenth century brought reforms such as centralized public administration, reliance on specific, balanced budgets, innovations in public banking and public debt management, and reliance on direct taxation for revenue. For the first time in history, these reforms were also supported by the spread of industrialization and rising productivity. The nineteenth century was also the century of the industrialization of war, starting at mid-century and quickly gathering breakneck speed. By the 1880s, military engineering began to forge ahead of even civil engineering. A revolution in transportation with steamships and railroads also made massive, long-distance mobilizations possible, as shown by the Prussian example against the French in 1870-1871.[31]

The demands posed by these changes on state finances and economies differed. In the French case, the defense share stayed roughly the same, a little over 30 percent, throughout the nineteenth and early twentieth centuries, whereas the military burden increased by about one percentage point, to 4.2 percent. In the UK case, the mean defense share declined by two percentage points, to 36.7 percent, in 1870-1913 compared with the early nineteenth century. The strength of the British economy, however, meant that the military burden actually declined slightly, to 2.6 percent, a figure similar to that incurred by Germany in the same period. For most countries the period leading up to the First World War meant higher military burdens than that, such as Japan’s 6.1 percent. However, the United States, the new economic leader by the closing decades of the century, spent on average a meager 0.7 percent of its GDP for military purposes, a trend that continued throughout the interwar period as well (with a military burden of 1.2 percent). As seen in Figure 2, the military burdens incurred by the Great Powers also varied in terms of timing, suggesting different reactions to external and internal pressures. Nonetheless, aggregate, systemic real military spending showed a clear upward trend for the entire period. Moreover, the impact of the Russo-Japanese War was immense for the total (real) spending of the sixteen states represented in the figure below, because both countries were Great Powers and Russian military expenditures alone were massive. The unexpected defeat of the Russians, along with the arrival of the dreadnoughts, unleashed an intensive arms race.[32]

Figure 2

Military Burdens of Four Great Powers and Aggregate Real Military Expenditure (ME) of Sixteen Countries, 1870-1913

Sources: See Jari Eloranta, “Struggle for Leadership? Military Spending Behavior of the Great Powers, 1870-1913,” Appalachian State University, Department of History, unpublished manuscript, 2005b, which also describes the constructed system of states and the methods involved in converting the expenditures into a common currency (using exchange rates and purchasing power parities), always a controversial exercise.
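The following minimal sketch, built entirely on assumed numbers rather than the data behind Figure 2, illustrates why such currency conversion is controversial: market exchange rates and purchasing power parities (PPPs) can yield quite different dollar totals for the same domestic spending figure.

```python
# A minimal sketch with hypothetical numbers (not the article's data) showing how the
# choice between market exchange rates and PPPs changes a country's dollar-denominated
# military spending, and hence cross-country comparisons.

me_lcu = 500.0e6        # assumed: 500 million local currency units (LCU) of military spending
exchange_rate = 5.0     # assumed market rate: 5 LCU per US dollar
ppp_rate = 3.0          # assumed PPP: 3 LCU buy at home what 1 dollar buys in the US

me_usd_xr = me_lcu / exchange_rate    # conversion at the market exchange rate
me_usd_ppp = me_lcu / ppp_rate        # conversion at purchasing power parity

print(f"ME at market exchange rates:  {me_usd_xr / 1e6:.0f} million USD")
print(f"ME at purchasing power parity: {me_usd_ppp / 1e6:.0f} million USD")
# Under these assumptions the PPP figure is two-thirds larger, which is why aggregate
# totals and country rankings can shift depending on the conversion method chosen.
```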

With the beginning of the First World War in 1914, this military potential was unleashed in Europe with horrible consequences, as most of the nations anticipated a quick victory but ended up fighting a war of attrition in the trenches. Mankind had finally, even officially, entered the age of total war.[33] It has been estimated that about nine million combatants and twelve million civilians died during the so-called Great War, with extensive property damage especially in France, Belgium, and Poland. According to Rondo Cameron and Larry Neal, the direct financial losses arising from the Great War were about 180-230 billion 1914 U.S. dollars, whereas the indirect losses of property and capital rose to over 150 billion dollars.[34] According to the most recent estimates, the economic losses arising from the war could be as high as 692 billion 1938 U.S. dollars.[35] But how much of their resources did the belligerents have to mobilize, and what were the human costs of the war?

As Table 2 displays, the French military burden was fairly high, as were the relative size of its military forces and its battle deaths. France thus mobilized the most resources in the war and, consequently, suffered the greatest relative losses. The mobilization by Germany was also quite efficient, since almost the entire state budget was used to support the war effort. The United States, on the other hand, barely participated in the war, and its personnel losses in the conflict were relatively small, as were its economic burdens. In comparison, the massive population reserves of Russia allowed it to absorb fairly high personnel losses, quite similar to the Soviet experience in the Second World War.

Table 2

Resource Mobilization by the Great Powers in the First World War

Country (years in the war) | Average military burden (% of GDP) | Average defense share of government spending (%) | Military personnel (% of population) | Battle deaths (% of population)
France (1914-1918) | 43 | 77 | 11 | 3.5
Germany (1914-1918) | .. | 91 | 7.3 | 2.7
Russia (1914-1917) | .. | .. | 4.3 | 1.4
UK (1914-1918) | 22 | 49 | 7.3 | 2.0
US (1917-1918) | 7 | 47 | 1.7 | 0.1

Sources: Historical Statistics of the United States, Colonial Times to 1970, Washington, DC: U.S. Bureau of Census, 1975; Louis Fontvieille. Evolution et croissance de l’Etat Français: 1815-1969, Economies et sociëtës, Paris: Institut de Sciences Mathematiques et Economiques Appliquees, 1976 ; B. R. Mitchell. International Historical Statistics: Europe, 1750-1993, 4th edition, Basingstoke: Macmillan Academic and Professional, 1998a; E. V. Morgan, Studies in British Financial Policy, 1914-1925., London: Macmillan, 1952; J. David Singer and Melvin Small. National Material Capabilities Data, 1816-1985. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, 1993. See also Jari Eloranta, “Sotien taakka: Makrotalouden ongelmat ja julkisen talouden kipupisteet maailmansotien jälkeen (The Burden of Wars: The Problems of Macro Economy and Public Sector after the World Wars),” in Kun sota on ohi, edited by Petri Karonen and Kerttu Tarjamo (forthcoming), 2005a.

In the interwar period, the pre-existing tendencies to continue social programs and support new bureaucracies made it difficult for the participants to cut their public expenditure, leading to a displacement of government spending to a slightly higher level in many countries. Public spending, especially in the 1920s, was in turn very static by nature, plagued by budgetary immobility and standoffs, particularly in Europe. This meant that although defense shares dropped noticeably in many countries, except the authoritarian regimes, their military burdens stayed at similar levels or even increased — for example, the French military burden rose to a mean level of 7.2 percent in this period. In Great Britain, too, the mean defense share dropped to 18.0 percent, although the mean military burden actually increased compared to the pre-war period, despite the military expenditure cuts and the “Ten-Year Rule” of the 1920s. For these countries, the mid-1930s marked the beginning of intense rearmament, whereas some of the authoritarian regimes had begun earlier in the decade. Germany under Hitler increased its military burden from 1.6 percent in 1933 to 18.9 percent in 1938, a rearmament program combining creative financing with the promise of both guns and butter for the Germans. Mussolini was not quite as successful in his efforts to realize a new Roman Empire, with a military burden fluctuating between four and five percent in the 1930s (5.0 percent in 1938). The Japanese rearmament drive was perhaps the most impressive, with a military burden as high as 22.7 percent and a defense share of over 50 percent in 1938. For many countries, such as France and Russia, the rapid pace of technological change in the 1930s rendered many of the earlier armaments obsolete only two or three years later.[36]

Figure 3
Military Burdens of Denmark, Finland, France, and the UK, 1920-1938

Source: Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Dissertation, European University Institute, 2002.

There were differences among the democracies as well, as seen in Figure 3. Finland’s behavior was similar to that of the UK and France, i.e. it belonged to the so-called high-spending group among European democracies. This was also similar to the behavior of most East European states. Denmark was among the low-spending group, perhaps due to the futility of trying to defend its borders amidst probable conflicts involving the giants to its south, France and Germany. Overall, the democracies maintained fairly steady military burdens throughout the period. Their rearmament was, however, much slower than the effort amassed by most autocracies. This is also amply displayed in Figure 4.

Figure 4
Military Burdens of Germany, Italy, Japan, and Russia/USSR, 1920-1938

Sources: Eloranta (2002), see especially appendices for the data sources. There are severe limitations and debates related to, for example, the German (see e.g. Werner Abelshauser, “Germany: Guns, Butter, and Economic Miracles,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 122-176, Cambridge: Cambridge University Press, 2000) and the Soviet data (see especially R. W. Davies, “Soviet Military Expenditure and the Armaments Industry, 1929-33: A Reconsideration,” Europe-Asia Studies 45, no. 4 (1993): 577-608, as well as R. W. Davies and Mark Harrison. “The Soviet Military-Economic Effort under the Second Five-Year Plan, 1933-1937,” Europe-Asia Studies 49, no. 3 (1997): 369-406).

In the ensuing conflict, the Second World War, the initial phase from 1939 to early 1942 favored the Axis as far as strategic and economic potential was concerned. After that, the war of attrition, with the United States and the USSR joining the Allies, turned the tide in favor of the Allies. For example, in 1943 the Allied total GDP was 2,223 billion international dollars (in 1990 prices), whereas the Axis accounted for only 895 billion. The impact of the Second World War on the participants’ economies was also much more profound than that of the First World War. For example, Great Britain at the height of the First World War incurred a military burden of about 27 percent, whereas its military burden throughout the Second World War was consistently over 50 percent.[37]

Table 3

Resource Mobilization by the Great Powers in the Second World War

Country (years in the war) | Average military burden (% of GDP) | Average defense share of government spending (%) | Military personnel (% of population) | Battle deaths (% of population)
France (1939-1945) | .. | .. | 4.2 | 0.5
Germany (1939-1945) | 50 | .. | 6.4 | 4.4
Soviet Union (1939-1945) | 44 | 48 | 3.3 | 4.4
UK (1939-1945) | 45 | 69 | 6.2 | 0.9
USA (1941-1945) | 32 | 71 | 5.5 | 0.3

Sources: Singer and Small (1993); Stephen Broadberry and Peter Howlett, “The United Kingdom: ‘Victory at All Costs’,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge University Press, 1998); Mark Harrison. “The Economics of World War II: An Overview,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge: Cambridge University Press, 1998a); Mark Harrison, “The Soviet Union: The Defeated Victor,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 268-301 (Cambridge: Cambridge University Press, 2000); Mitchell (1998a); B.R. Mitchell. International Historical Statistics: The Americas, 1750-1993, fourth edition, London: Macmillan, 1998b. The Soviet defense share only applies to years 1940-1945, whereas the military burden applies to 1940-1944. These two measures are not directly comparable, since the former is measured in current prices and the latter in constant prices.

As Table 3 shows, the greatest military burden was most likely incurred by Germany, even though the other Great Powers experienced similar levels. Only the massive economic resources of the United States made its lower military burden possible. The UK and the United States also mobilized their central/federal government expenditures efficiently for the military effort. In this sense the Soviet Union fared the worst, and the share of military personnel in its population was also relatively small compared to the other Great Powers. On the other hand, the economic and demographic resources that the Soviet Union possessed ultimately ensured its survival during the German onslaught. On the aggregate, the largest personnel losses were incurred by Germany and the Soviet Union, in fact many times those of the other Great Powers.[38] In comparison with the First World War, the second was even more destructive and lethal, and the aggregate economic losses from the war exceeded 4,000 billion 1938 U.S. dollars. After the war, European industrial and agricultural production amounted to only half of the 1938 total.[39]

The Atomic Age and Beyond

The Second World War also brought a new role for the United States in world politics, a military-political leadership role warranted by its dominant economic status, established over fifty years earlier. With the establishment of NATO in 1949, a formidable defense alliance was formed for the capitalist countries. The USSR, which had risen to new prominence due to the war, established the Warsaw Pact in 1955 to counter these efforts. The war also meant a change in the public spending and taxation levels of most Western nations. The introduction of welfare states brought the OECD average of government expenditure from just under 30 percent of GDP in the 1950s to over 40 percent in the 1970s. Military spending levels followed suit and peaked during the early Cold War. The American military burden rose above 10 percent in 1952-1954, and the United States has retained a high mean level of 6.7 percent for the post-war period. Great Britain and France followed the American example after the Korean War.[40]

The Cold War embodied a relentless armaments race between the two superpowers, with nuclear weapons now the main investment item (see Figure 5). The USSR, according to some figures, spent about 60 to 70 percent of the American level in the 1950s, and actually spent more than the United States in the 1970s. Nonetheless, the United States maintained a massive advantage over the Soviets in terms of nuclear warheads. However, figures collected by SIPRI (the Stockholm International Peace Research Institute) suggest an enduring yet dwindling lead for the US even in the 1970s. The same figures, on the other hand, point to a 2-to-1 lead in favor of the NATO countries over the Warsaw Pact members in the 1970s and early 1980s. Part of this armaments race was due to technological advances that increased the cost per soldier — it has been estimated that technological change produced a mean annual increase in real costs of around 5.5 percent in the post-war period. Nonetheless, spending on personnel and their maintenance has remained the biggest spending item for most countries.

Figure 5

Military Burdens (=MILBUR) of the United States and the United Kingdom, and the Soviet Military Spending as a Percentage of the US Military Spending (ME), 1816-1993

Sources: References to the economic data can be found in Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, edited by Joel Mokyr, 30-33 (Oxford: Oxford University Press, 2003b). ME (Military Expenditure) data from Singer and Small (1993), supplemented with the SIPRI (available from: http://www.sipri.org/) data for 1985-1993. Details are available from the author upon request. Exchange rates from Global Financial Data (Online databank), 2003. Available from http://www.globalfindata.com/. The same caveats apply to the underlying currency conversion methods as in Figure 2.

One often-cited outcome of this Cold War arms race is the so-called Military-Industrial Complex (MIC), usually referring to the influence that the military and industry have on each other’s policies. The more nefarious connotation refers to the unduly large influence that military producers might have, in such a collusive relationship, over the public sector’s acquisitions and foreign policy in particular. In fact, the origins of this type of interaction can be found further back in history. As Paul Koistinen has emphasized, the First World War was a watershed in business-government relationships, since businessmen were often brought into government to make supply decisions during this total conflict. Most governments, as a matter of fact, needed the expertise of the core business elites during the world wars. In the United States some form of an MIC came into existence before 1940. Similar developments can be seen in other countries before the Second World War, for example in the Soviet Union. The Cold War simply reinforced these tendencies.[41] Findings by, for example, Robert Higgs establish that the financial performance of the leading defense contracting companies was, on average, much better than that of comparable large corporations during the period 1948-1989. Nonetheless, his findings do not support the normative conclusion that the profits of defense contractors were “too high.”[42]

World spending levels began a slow decline from the 1970s onwards, with the Reagan years an exception for the US. In 1986, the US military burden was 6.5 percent, whereas by 1999 it was down to 3.0 percent. In France, the military burden declined from its post-war peaks in the 1950s to a mean level of 3.6 percent over the period 1977-1999. This has mostly been the outcome of the reduction in tensions between the rival groups and the downfall of the USSR and the communist regimes in Eastern Europe. The USSR was spending almost as much on its armed forces as the United States up until the mid-1980s, and the Soviet military burden was still 12.3 percent in 1990. Under the Russian Federation, with a declining GDP, this level dropped rapidly, to 3.2 percent in 1998. Similarly, other nations have scaled down their military spending since the late 1980s and the 1990s. For example, German military spending in constant US dollars was over 52 billion in 1991, whereas by 1999 it had declined to less than 40 billion. In the French case, the decline was from a little over 52 billion in 1991 to below 47 billion in 1999, with the military burden decreasing from 3.6 percent to 2.8 percent.[43]

Overall, according to the SIPRI figures, world military spending fell by about one-third in real terms in 1989-1996, with some fluctuation and even a small increase since then. In the global scheme, world military expenditure is still highly concentrated in a few countries, with the 15 major spenders accounting for 80 percent of the world total in 1999. The newest military spending estimates (see e.g. http://www.sipri.org/) put world military expenditures on a growth trend once again, due to new threats such as international terrorism and the conflicts related to it. In terms of absolute figures, the United States still dominates world military spending, with a 47 percent share of the world total in 2003. The U.S. spending total becomes less impressive when purchasing power parities are utilized. Nonetheless, the United States has entered the third millennium as the world’s only real superpower – a role that it sometimes embraces awkwardly. Whereas the United States was an absent hegemon in the late nineteenth century and the first half of the twentieth century, it now has to maintain its presence in many parts of the world, sometimes despite objections from the other players in the international system.[44]

Conclusions

Warfare has played a crucial role in the evolution of human societies. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was commonly the maintenance of adequate supply for the armed forces during prolonged campaigns. This also put constraints on the size and expansion of the early empires, at least until the introduction of iron weaponry. Rome, for example, was able to sustain a large, geographically diverse empire for a long period of time. The disjointed Middle Ages splintered European societies into smaller communities, in which so-called roving bandits ruled, at least until the arrival of more organized military forces from the tenth century onwards. At the same time, the empires of China and the Muslim world developed into cradles of civilization in terms of scientific discoveries and military technologies.

The geographic and economic expansion of the early modern European states started to challenge other regimes all over the world, made possible in part by their military and naval supremacy as well as, later on, their industrial prowess. The age of total war and revolutions in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Even though military spending was regularly the biggest item in the budget of most states before the twentieth century, it still represented only a modest share of their respective GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered aggregate military spending in the world, if only temporarily. Newer security challenges such as terrorism and various interstate rivalries have since pushed overall world military spending back onto a growth path.

The cost of warfare has increased especially since the early modern period. The adoption of new technologies and massive standing armies, in addition to the increase in the “bang-for-buck” (namely, the destructive effect of military investments), have kept military expenditures in a central role in modern fiscal regimes. Although the growth of welfare states in the twentieth century has forced some tradeoffs between “guns and butter,” the spending choices have usually been complementary rather than competing. Thus, the size and spending of governments have increased. Even though the growth in welfare spending has abated somewhat since the 1980s, according to Peter Lindert welfare states will most likely still experience at least modest expansion in the future. Nor is it likely that military spending will be displaced as a major spending item in national budgets. Various international threats and the lack of international cooperation will ensure that military spending remains the main rival to social expenditures.[45]


[1] I thank several colleagues for their helpful comments, especially Mark Harrison, Scott Jessee, Mary Valante, Ed Behrend, David Reid, as well as an anonymous referee and EH.Net editor Robert Whaples. The remaining errors and interpretations are solely my responsibility.

[2] See Paul Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (London: Fontana, 1989). Kennedy calls this type of approach, following David Landes, “large history.” On criticism of Kennedy’s “theory,” see especially Todd Sandler and Keith Hartley, The Economics of Defense, ed. Mark Perlman, Cambridge Surveys of Economic Literature (Cambridge: Cambridge University Press, 1995) and the studies listed in it. Other examples of long-run explanations can be found in, e.g., Maurice Pearton, The Knowledgeable State: Diplomacy, War, and Technology since 1830 (London: Burnett Books: Distributed by Hutchinson, 1982) and William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 (Chicago: University of Chicago Press, 1982).

[3] Jari Eloranta, “Kriisien ja konfliktien tutkiminen kvantitatiivisena ilmiönä: Poikkitieteellisyyden haaste suomalaiselle sotahistorian tutkimukselle (The Study of Crises and Conflicts as Quantitative Phenomenon: The Challenge of Interdisciplinary Approaches to Finnish Study of Military History),” in Toivon historia – Toivo Nygårdille omistettu juhlakirja, ed. Kalevi Ahonen, et al. (Jyväskylä: Gummerus Kirjapaino Oy, 2003a).

[4] See Mark Harrison, ed., The Economics of World War II: Six Great Powers in International Comparisons (Cambridge, UK: Cambridge University Press, 1998b). Classic studies of this type are Alan Milward’s works on the European war economies; see e.g. Alan S. Milward, The German Economy at War (London: Athlon Press, 1965) and Alan S. Milward, War, Economy and Society 1939-1945 (London: Allen Lane, 1977).

[5] Sandler and Hartley, The Economics of Defense, xi; Jari Eloranta, “Different Needs, Different Solutions: The Importance of Economic Development and Domestic Power Structures in Explaining Military Spending in Eight Western Democracies during the Interwar Period” (Licentiate Thesis, University of Jyväskylä, 1998).

See Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938” (Dissertation, European University Institute, 2002) for details.

[7] Ibid.

[8] Daniel S. Geller and J. David Singer, Nations at War. A Scientific Study of International Conflict, vol. 58, Cambridge Studies in International Relations (Cambridge: Cambridge University Press, 1998), e.g. 1-7.

[9] See e.g. Jack S. Levy, “Theories of General War,” World Politics 37, no. 3 (1985). For an overview, see especially Geller and Singer, Nations at War: A Scientific Study of International Conflict. A classic study of war from the holistic perspective is Quincy Wright, A Study of War (Chicago: University of Chicago Press, 1942). See also Geoffrey Blainey, The Causes of War (New York: Free Press, 1973). On rational explanations of conflicts, see James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (1995).

[10] Charles Tilly, Coercion, Capital, and European States, AD 990-1990 (Cambridge, MA: Basil Blackwell, 1990), 6-14.

[11] For more, see especially ibid., Chapters 1 and 2.

[12] George Modelski and William R. Thompson, Leading Sectors and World Powers: The Coevolution of Global Politics and Economics, Studies in International Relations (Columbia, SC: University of South Carolina Press, 1996), 14-40. George Modelski and William R. Thompson, Seapower in Global Politics, 1494-1993 (Houndmills, UK: Macmillan Press, 1988).

Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000, xiii. On specific criticism, see e.g. Jari Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938,” Essays in Economic and Business History XIX (2001).

[14] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Sandler and Hartley, The Economics of Defense.

[15] Brian M. Pollins and Randall L. Schweller, “Linking the Levels: The Long Wave and Shifts in U.S. Foreign Policy, 1790- 1993,” American Journal of Political Science 43, no. 2 (1999), e.g. 445-446. E.g. Alex Mintz and Chi Huang, “Guns versus Butter: The Indirect Link,” American Journal of Political Science 35, no. 1 (1991) suggest an indirect (negative) growth effect via investment at a lag of at least five years.

Carolyn Webber and Aaron Wildavsky, A History of Taxation and Expenditure in the Western World (New York: Simon and Schuster, 1986).

[17] He outlines most of the following in Richard Bonney, “Introduction,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999b).

[18] Mancur Olson, “Dictatorship, Democracy, and Development,” American Political Science Review 87, no. 3 (1993).

[19] On the British Empire, see especially Niall Ferguson, Empire: The Rise and Demise of the British World Order and the Lessons for Global Power (New York: Basic Books, 2003). Ferguson has also tackled the issue of a possible American empire in a more polemical Niall Ferguson, Colossus: The Price of America’s Empire (New York: Penguin Press, 2004).

[20] Ferguson outlines his analytical framework most concisely in Niall Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000 (New York: Basic Books, 2001), especially Chapter 1.

[21] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, 39-67. See also McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000.

McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, 9-12.

[23] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[24] This interpretation of early medieval warfare and societies, including the concept of feudalism, has been challenged in more recent military history literature. See especially John France, “Recent Writing on Medieval Warfare: From the Fall of Rome to c. 1300,” Journal of Military History 65, no. 2 (2001).

[25] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, McNeill, The Pursuit of Power. Technology, Armed Force, and Society since A.D. 1000. See also Richard Bonney, ed., The Rise of the Fiscal State in Europe c. 1200-1815 (Oxford: Oxford University Press, 1999c).

[26] Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000, Tilly, Coercion, Capital, and European States, AD 990-1990, Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, ed. Joel Mokyr (Oxford: Oxford University Press, 2003b). See also Modelski and Thompson, Seapower in Global Politics, 1494-1993.

[27] Tilly, Coercion, Capital, and European States, AD 990-1990, 165, Henry Kamen, “The Economic and Social Consequences of the Thirty Years’ War,” Past and Present April (1968).

[28] Eloranta, “National Defense,” Henry Kamen, Empire: How Spain Became a World Power, 1492-1763, 1st American ed. (New York: HarperCollins, 2003), Douglass C. North, Institutions, Institutional Change, and Economic Performance (New York.: Cambridge University Press, 1990).

[29] Richard Bonney, “France, 1494-1815,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999a). War expenditure percentages (for the seventeenth and eighteenth centuries) were calculated using the so-called Forbonnais (and Bonney) database(s), available from European State Finance Database: http://www.le.ac.uk/hi/bon/ESFDB/RJB/FORBON/forbon.html and should be considered only illustrative.

[30] Marjolein ’t Hart, "The United Provinces, 1579-1806," in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999). See also Ferguson, The Cash Nexus.

[31] See especially McNeill, The Pursuit of Power.

[32] Eloranta, "External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938," Eloranta, "National Defense". See also Ferguson, The Cash Nexus. On the military spending patterns of Great Powers in particular, see J. M. Hobson, "The Military-Extraction Gap and the Wary Titan: The Fiscal Sociology of British Defence Policy 1870-1914," Journal of European Economic History 22, no. 3 (1993).

[33] The practice of total war, of course, is as old as civilizations themselves, ranging from the Punic Wars to the more modern conflicts. Here total war refers to the twentieth-century connotation of the term, embodying the use of all economic, political, and military might of a nation to destroy another in war. Therefore, even though the destruction of Carthage certainly qualifies as an action of total war, it is only in the nineteenth and twentieth centuries that this type of warfare and strategic thinking comes to full fruition. For example, the famous ancient military genius Sun Tzu advocated caution and planning in warfare, rather than using all means possible to win a war: "Thus, those skilled in war subdue the enemy’s army without battle. They capture his cities without assaulting them and overthrow his state without protracted operations." Sun Tzu, The Art of War (Oxford: Oxford University Press, 1963), 79. With the ideas put forth by Clausewitz (see Carl von Clausewitz, On War (London: Penguin Books, 1982), e.g. Book Five, Chapter II) in the nineteenth century, the French Revolution, and Napoleon, the nature of warfare began to change. Clausewitz’s absolute war did not go as far as prescribing indiscriminate slaughter or other ruthless means to subdue civilian populations, but did contribute to the new understanding of the means of warfare and military strategy in the industrial age. The generals and despots of the twentieth century drew their own conclusions, and thus total war came to include not only subjugating the domestic economy to the needs of the war effort but also propaganda, destruction of civilian (economic) targets, and genocide.

[34] Rondo Cameron and Larry Neal, A Concise Economic History of the World: From Paleolithic Times to the Present, 4th ed. (Oxford: The Oxford University Press, 2003), 339. Thus, the estimate in e.g. Eloranta, “National Defense” is a hypothetical minimum estimate originally expressed in Gerard J. de Groot, The First World War (New York: Palgrave, 2001).

[35] See Table 13 in Stephen Broadberry and Mark Harrison, “The Economics of World War I: An Overview,” in The Economics of World War I, ed. Stephen Broadberry and Mark Harrison ((forthcoming), Cambridge University Press, 2005). The figures are, as the authors point out, only tentative.

[36] Eloranta, "External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938," Eloranta, "National Defense", Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[37] Eloranta, “National Defense”.

[38] Mark Harrison, "The Economics of World War II: An Overview," in The Economics of World War II: Six Great Powers in International Comparisons, ed. Mark Harrison (Cambridge, UK: Cambridge University Press, 1998a), Eloranta, "National Defense."

[39] Cameron and Neal, A Concise Economic History of the World, Harrison, “The Economics of World War II: An Overview,” Broadberry and Harrison, “The Economics of World War I: An Overview.” Again, the same caveats apply to the Harrison-Broadberry figures as disclaimed earlier.

[40] Eloranta, “National Defense”.

[41] Mark Harrison, “Soviet Industry and the Red Army under Stalin: A Military-Industrial Complex?” Les Cahiers du Monde russe 44, no. 2-3 (2003), Paul A.C. Koistinen, The Military-Industrial Complex: A Historical Perspective (New York: Praeger Publishers, 1980).

[42] Robert Higgs, “The Cold War Economy: Opportunity Costs, Ideology, and the Politics of Crisis,” Explorations in Economic History 31, no. 3 (1994); Ruben Trevino and Robert Higgs. 1992. “Profits of U.S. Defense Contractors,” Defense Economics Vol. 3, no. 3: 211-18.

[43] Eloranta, “National Defense”.

[44] For more, see Eloranta, "Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938."

[45] For more, see especially Ferguson, The Cash Nexus, Peter H. Lindert, Growing Public. Social Spending and Economic Growth since the Eighteenth Century, 2 Vols., Vol. 1 (Cambridge: Cambridge University Press, 2004). On tradeoffs, see e.g. David R. Davis and Steve Chan, “The Security-Welfare Relationship: Longitudinal Evidence from Taiwan,” Journal of Peace Research 27, no. 1 (1990), Herschel I. Grossman and Juan Mendoza, “Butter and Guns: Complementarity between Economic and Military Competition,” Economics of Governance, no. 2 (2001), Alex Mintz, “Guns Versus Butter: A Disaggregated Analysis,” The American Political Science Review 83, no. 4 (1989), Mintz and Huang, “Guns versus Butter: The Indirect Link,” Kevin Narizny, “Both Guns and Butter, or Neither: Class Interests in the Political Economy of Rearmament,” American Political Science Review 97, no. 2 (2003).

Citation: Eloranta, Jari. “Military Spending Patterns in History”. EH.Net Encyclopedia, edited by Robert Whaples. September 16, 2005. URL http://eh.net/encyclopedia/military-spending-patterns-in-history/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, exit, is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent.

Table 2
Union Growth in Peak and Other Years

Country, Years of data, Growth in top 5 years, Growth in top 10 years, Growth in all years, Share of growth in top 5 years (%), Share in top 10 years (%), Excess growth in top 5 years (%), Excess in top 10 years (%)
Australia 83 720 000 1 230 000 3 125 000 23.0 39.4 17.0 27.3
Austria 52 5 411 000 6 545 000 3 074 000 176.0 212.9 166.8 194.4
Canada 108 855 000 1 532 000 4 028 000 21.2 38.0 16.6 28.8
Denmark 85 521 000 795 000 1 883 000 27.7 42.2 21.8 30.5
France 92 6 605 000 7 557 000 2 872 000 230.0 263.1 224.5 252.3
Germany 82 10 849 000 13 543 000 9 120 000 119.0 148.5 112.9 136.3
Italy 38 3 028 000 4 671 000 3 713 000 81.6 125.8 68.4 99.5
Japan 43 4 757 000 6 692 000 8 983 000 53.0 74.5 41.3 51.2
Netherlands 71 671 000 1 009 000 1 158 000 57.9 87.1 50.9 73.0
Norway 85 304 000 525 000 1 177 000 25.8 44.6 19.9 32.8
Sweden 99 633 000 1 036 000 3 859 000 16.4 26.8 11.4 16.7
UK 96 4 929 000 8 011 000 8 662 000 56.9 92.5 51.7 82.1
US 109 10 247 000 14 796 000 22 293 000 46.0 66.4 41.4 57.2
Total 1043 49 530 000 67 942 000 73 947 000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. Because years of rapid growth are often followed by years of decline, some of this growth is temporary, and growth in the peak years can exceed growth over the entire period.

Sources: Bain and Price (1980): 39, Visser (1989)
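The "excess growth" measure described in the note to Table 2 is straightforward to reproduce. The sketch below, in Python, is only an illustration of that arithmetic; the function and the membership series are hypothetical, not the Bain and Price or Visser data underlying the table.

def excess_growth(annual_changes, top_n):
    # Net growth over all years and in the top_n fastest-growing years.
    total = sum(annual_changes)
    top = sum(sorted(annual_changes, reverse=True)[:top_n])
    share = 100 * top / total                        # share of growth in the top_n years
    even_share = 100 * top_n / len(annual_changes)   # share under an even spread of growth
    return share, share - even_share                 # (share, excess growth), in percent

# Hypothetical annual membership changes: a five-year surge followed by decline.
changes = [10, 15, 5, 200, 350, 400, 150, 120, -80, -60, 20, 10, 5, 15, 10]
print(excess_growth(changes, 5))
print(excess_growth(changes, 10))

On this invented series the top five years account for more than 100 percent of net growth, which is the situation flagged in the note: growth in the peak years can exceed growth over the whole period when booms are followed by decline.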

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country, Growth in lowest striker-rate quartile (%), Second quartile, Third quartile, Highest quartile, Change (highest minus lowest)
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker-rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile.
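The quartile comparison reported in Table 3 can be sketched in the same spirit. The snippet below only illustrates the procedure described in the note; the striker-rate and growth figures are invented, and any years left over after dividing the sample by four are simply dropped.

def growth_by_striker_quartile(striker_rates, growth_rates):
    # Pair each year's striker rate with its membership growth and sort by striker rate.
    years = sorted(zip(striker_rates, growth_rates))
    q = len(years) // 4                      # years per quartile (remainder ignored)
    quartiles = [years[i * q:(i + 1) * q] for i in range(4)]
    # Mean membership growth within each quartile, from lowest to highest striker rate.
    return [sum(g for _, g in quart) / q for quart in quartiles]

# Hypothetical data: twenty years of striker rates (%) and membership growth rates (%).
strikers = [0.5, 1.2, 0.1, 3.0, 5.5, 0.3, 2.2, 4.8, 0.9, 6.1,
            1.5, 0.2, 2.9, 4.1, 0.7, 5.0, 1.8, 3.6, 0.4, 2.5]
growth = [0.5, 1.0, -0.5, 3.0, 12.0, 0.0, 2.5, 9.0, 1.5, 15.0,
          2.0, -1.0, 4.0, 8.0, 1.0, 11.0, 2.0, 6.0, 0.0, 3.5]
print(growth_by_striker_quartile(strikers, growth))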

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and capping wages, employment, and output. Controlled by independent craftsmen, “masters” who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few could anticipate moving up to become a master artisan or owning their own establishment. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the say of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions could have a strong bargaining position that was enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a "take-it-or-leave-it" basis; either the employers accepted the demands or fought a contest of strength to determine whether they could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded but only when they attract allies among politicians, state officials, and the affluent public. Sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly in mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement when employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late-nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers — workers whom historian Eric Hobsbawm labeled the "labor aristocracy" (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, including 120,000 belonging to craft unions of carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to "industrial" unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, "solidarity" strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from threatened authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave in 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions driving membership down to half a million in September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other European countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, unions and party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between members of a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labour Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the early 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (or CGT), which they tried to use as a base for a revolutionary general strike where the workers would seize economic and political power. Consolidating craft unions into industrial and regional unions, the Bourses du travail, syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists to win concessions beyond any they could win with economic leverage. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would replace the Republic with an authoritarian regime. Reminded daily of the importance of republican values and the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, and allowed French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strike breakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth-century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural workers belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise, most union members belonged to craft organizations, including nearly half the printers, and a third of cigar makers, construction workers and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture Forestry Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation Communication Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: Union membership from Wolman (1936); employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1986, 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry among workers still performing traditional tasks where training was through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper and metal fabrication using technologies without traditional craft skills. AFL strongholds included construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, trades that employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept unions.

Unions in the World War I Era

The AFL and World War I

For all its limits, it must be acknowledged that the AFL and its craft affiliates survived after their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, with favorable circumstances, could have formed the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking and steel, doubling union membership between 1915 and 1919. But when Federal support ended with the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of its deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-Time 1913 12 498 000 11 742 000 756 000
1920 27 649 000 25 687 000 1 962 000
Growth 1913-20: 121% 119% 160%
Post-war 1920 27 649 000
1929 18 149 000
Growth 1920-29: -34%

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustrations with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war, including 2.5 million strikers in France in 1919 and 1920, compared with 200,000 strikers in 1913, 13 million German strikers, up from 300,000 in 1913, and 5 million American strikers, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforce of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a peak of 26 million members in eleven countries in 1920 to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades where employment was usually declining. By 1924, they had been almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion, open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy was largely spared the scourge of unemployment and economic collapse — a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing but depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership when unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression 1929 12 401 000 11 508 000
1933 11 455 000 10 802 000
Growth 1929-33 -7.6% -6.1%
Popular Front Period 1933 10 802 000
1938 19 007 000
Growth 1933-38 76.0%
Second World War 1938 19 007 000
1947 35 485 000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of June 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Phillippe and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40 hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, French unions gained new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” It included lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they were soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm was developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving "employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers." AFL leader William Green pronounced this a "charter of industrial freedom" and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, aluminum, lumber and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not with the AFL’s flawed tactics but with its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935 almost as many industrial establishments had employer-dominated employee representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters’ Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became an independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism and other industries. Building effectively on local rank-and-file militancy, including sitdown strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged to enforce employees’ “right to self-organization, to form, join, or assist labor organizations to bargain collectively through representatives of their own choosing and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. Shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act’s preamble as a mandate to promote organization. By 1945 the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period’s union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones and Laughlin Steel Corporation (1937). Furthermore, the election procedure’s gross contribution of 5,000,000 members was less than half of the period’s net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open shop employers in cities like Akron, Ohio, and Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of the employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions, both because it eliminated unemployment and because state officials supported unions to gain backing for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce where unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and "maintenance of membership" rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy and new benefit programs, and even to raise funds for political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. "Maintenance of membership" rules prevented free riders even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected: American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War Two promoted unions and social change. A European civil war, the war divided the continent not only between warring countries but within countries between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry into government of socialists and Communists.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labour Party government elected in the United Kingdom in 1945 established a new National Health Service, and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

Europeans unions and the state after World War II

Unions and the political left were stronger everywhere throughout post-war Europe, but in some countries labor’s position deteriorated quickly. In France, Italy, and Japan, with the onset of the Cold War, the popular front uniting Communists, socialists, and bourgeois liberals dissolved, and labor’s management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as in Scandinavia but also in Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom and the United States, because their unions had not been accepted as bargaining partners by management and they lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into the government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American exceptionalism became most pronounced, when the United States emerged as the advanced capitalist democracy with the weakest labor movement. It was the only advanced capitalist democracy where unions went into prolonged decline right after the war. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies, and it has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor’s political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, “Operation Dixie,” failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO was defeated, and its failure left the South a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor at a moment of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war: non-Communist affiliates raided locals belonging to the Communist-led unions, fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason was left for the CIO to stay independent, and in 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America’s unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned any higher aspirations and used their unions for personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and over their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a ‘golden age’ for American unions. Established unions found a secure place at the bargaining table with America’s leading firms in industries such as autos, steel, trucking, and chemicals. Periodically negotiated contracts exchanged good wages for cooperative workplace relations. Negotiated rules provided a system of civil authority at work, with regulations governing promotion and layoffs and procedures giving workers the opportunity to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience, and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefit programs: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given to nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers earned back much of their wage gain through this higher productivity. Others, however, find little productivity gain for unionized workers after account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization’s productivity benefits.
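A rough back-of-the-envelope comparison helps show why the wage bill loomed larger than the productivity advantage in employers’ calculations. The sketch below uses only the approximate figures cited above (a roughly 20 percent union wage premium and a roughly 15 percent union productivity advantage), ignores the benefits differential, and is illustrative only rather than drawn from the studies cited.

```python
# Illustrative unit labor cost comparison using the approximate figures
# cited in the text: a ~20% union wage premium and a ~15% union
# productivity advantage. Benefits are ignored for simplicity.

nonunion_wage = 1.00                    # normalized hourly compensation
nonunion_output = 1.00                  # normalized output per hour

union_wage = nonunion_wage * 1.20       # ~20% wage premium
union_output = nonunion_output * 1.15   # ~15% productivity advantage

cost_nonunion = nonunion_wage / nonunion_output
cost_union = union_wage / union_output

print(f"Nonunion unit labor cost: {cost_nonunion:.3f}")
print(f"Union unit labor cost:    {cost_union:.3f}")
# The union figure comes out about 4 percent higher (1.20 / 1.15 = 1.043),
# so even a sizable productivity advantage left unionized firms with
# higher unit labor costs -- before counting the larger benefit packages.
```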

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties: Communists in France and Italy, socialists or labor parties elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of racist provisions and practices of their own. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But here too the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of American unions. Maintaining their strength in traditionally masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, union decline in these industries, combined with growth in heavily female public-sector employment, led to the feminization of the labor movement. Union membership began to decline in the American private sector immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public-sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of public-sector workers, increasing union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and, most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s and, despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the American private-sector labor movement down to early-twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). Yet there remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector and among those private employers where workers have a free choice, workers join unions as readily as they ever did, and as readily as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, unions revived once a government committed to workplace democracy sheltered them from employer repression. If such a government returns, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.
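The ratio rows can be reproduced directly from the unionization rates above; the short sketch below (illustrative only) simply applies the definition given in the note. Small discrepancies from the published ratios, for example 0.809 versus 0.808, presumably reflect rounding in the reported rates.

```python
# Reproduce the ratio rows of Table 7 from the unionization rates above.
# Each ratio is the U.S. rate divided by the rate for the other countries.

us_rates = {
    "All industries":     [30.0, 24.7, 17.6],
    "Manufacturing":      [41.0, 35.0, 22.0],
    "Financial services": [5.0, 4.0, 2.0],
}
other_rates = {
    "All industries":     [37.1, 39.7, 35.3],  # six countries
    "Manufacturing":      [38.8, 44.0, 35.2],  # six countries
    "Financial services": [23.9, 23.8, 24.0],  # five countries (no French data)
}

for industry, us in us_rates.items():
    ratios = [round(u / o, 3) for u, o in zip(us, other_rates[industry])]
    print(industry, ratios)
# All industries [0.809, 0.622, 0.499]
# Manufacturing [1.057, 0.795, 0.625]
# Financial services [0.209, 0.168, 0.083]
```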

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric. Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Phillippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919-1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

Life Insurance in the United States through World War I

Sharon Ann Murphy

The first American life insurance enterprises can be traced back to the late colonial period. The Presbyterian Synods in Philadelphia and New York set up the Corporation for Relief of Poor and Distressed Widows and Children of Presbyterian Ministers in 1759; the Episcopalian ministers organized a similar fund in 1769. In the half century from 1787 to 1837, twenty-six companies offering life insurance to the general public opened their doors, but they rarely survived more than a couple of years and sold few policies [Figures 1 and 2]. The only early companies to experience much success in this line of business were the Pennsylvania Company for Insurances on Lives and Granting Annuities (chartered 1812), the Massachusetts Hospital Life Insurance Company (1818), the Baltimore Life Insurance Company (1830), the New York Life Insurance and Trust Company (1830), and the Girard Life Insurance, Annuity and Trust Company of Pennsylvania (1836). [See Table 1.]

Despite this tentative start, the life insurance industry did make some significant strides beginning in the 1830s [Figure 2]. Life insurance in force (the total death benefit payable on all existing policies) grew steadily from about $600,000 in 1830 to just under $5 million a decade later, with New York Life and Trust policies accounting for more than half of this latter amount. Over the next five years insurance in force almost tripled to $14.5 million before surging by 1850 to just under $100 million of life insurance spread among 48 companies. The top three companies – the Mutual Life Insurance Company of New York (1842), the Mutual Benefit Life Insurance Company of New Jersey (1845), and the Connecticut Mutual Life Insurance Company (1846) – accounted for more than half of this amount. The sudden success of life insurance during the 1840s can be attributed to two main developments – changes in legislation impacting life insurance and a shift in the corporate structure of companies towards mutualization.

Married Women’s Acts

Life insurance companies targeted women and children as the main beneficiaries of insurance, even though the majority of women were prevented by law from obtaining the protection offered in the unfortunate event of their husbands’ deaths. The first problem was that companies strictly adhered to the common law idea of insurable interest, which required that any person taking out insurance on the life of another have a specific monetary interest in that person’s continued life; “affection” (i.e., the relationship of husband and wife or parent and child) was not considered adequate evidence of insurable interest. Additionally, married women could not enter into contracts on their own and therefore could not take out life insurance policies either on themselves (for the benefit of their children or husband) or directly on their husbands (for their own benefit). One way around this problem was for the husband to take out the policy on his own life and assign his wife or children as the beneficiaries. This arrangement proved to be flawed, however, since the policy was considered part of the husband’s estate and therefore could be claimed by any creditors of the insured.

New York’s 1840 Law

This dilemma did not pass unnoticed by promoters of life insurance, who viewed it as one of the main stumbling blocks to the growth of the industry. The New York Life and Trust stood at the forefront of a campaign to pass a state law enabling women to procure life insurance policies protected from the claims of creditors. The law, which passed the New York state legislature on April 1, 1840, accomplished four important tasks. First, it established the right of a woman to enter into a contract of insurance on the life of her husband “by herself and in her name, or in the name of any third person, with his assent, as her trustee.” Second, it provided that the insurance would be “free from the claims of the representatives of her husband, or of any of his creditors” unless the annual premiums on the policy exceeded $300 (approximately the premium required to take out the maximum $10,000 policy on the life of a 40 year old). Third, in the event of the wife predeceasing the husband, the policy reverted to the children, who were granted the same protection from creditors. Finally, as the law was interpreted by both companies and the courts, wives were not required to prove their monetary interest in the life of the insured, establishing for the first time an instance of insurable interest independent of pecuniary interest in the life of another.

By December of 1840, Maryland had enacted an identical law – copied word for word from the New York statute. The Massachusetts legislation of 1844 went one step further by protecting from the claims of creditors all policies procured “for the benefit of a married woman, whether effected by her, her husband, or any other person.” The 1851 New Jersey law was the most stringent, limiting annual premiums to only $100. In those states where a general law did not exist, new companies often had the New York law inserted into their charter, with these provisions being upheld by the state courts. For example, the Connecticut Mutual Life Insurance Company (1846), the North Carolina Mutual Life Insurance Company (1849), and the Jefferson Life Insurance Company of Cincinnati, Ohio (1850) all provided this protection in their charters despite the silence of their respective states on the issue.

Mutuality

The second important development of the 1840s was the emergence of mutual life insurance companies in which any annual profits were redistributed to the policyholders rather than to stockholders. Although mutual insurance was not a new concept – the Society for Equitable Assurances on Lives and Survivorships of London had been operating under the mutual plan since its establishment in 1762 and American marine and fire companies were commonly organized as mutuals – the first American mutual life companies did not begin issuing policies until the early 1840s. The main impetus for this shift to mutualization was the panic of 1837 and the resulting financial crisis, which combined to dampen the enthusiasm of investors for projects ranging from canals and railroads to banks and insurance companies. Between 1838 and 1846, only one life insurance company was able to raise the capital essential for organization on a stock basis. On the other hand, mutuals required little initial capital, relying instead on the premium payments from high-volume sales to pay any death claims. The New England Mutual Life Insurance Company (1835) issued its first policy in 1844 and the Mutual Life Insurance Company of New York (1842) began operation in 1843; at least fifteen more mutuals were chartered by 1849.

Aggressive Marketing

In order to achieve the necessary sales volume, mutual companies began to aggressively promote life insurance through advertisements, editorials, pamphlets, and soliciting agents. These marketing tactics broke with the traditionally staid practices of banks and insurance companies whereby advertisements generally had provided only the location of the local office and agents passively had accepted applications from customers who inquired directly at their office.

Advantages of Mutuality

The mutual marketing campaigns not only advanced life insurance in general but mutuality in particular, which held widespread appeal for the public at large. Policyholders who could not afford to own stock in a proprietary insurance company could now share in the financial success of the mutual companies, with any annual profits (the excess of invested premium income over death payments) being redistributed to the policyholders, often in the form of reduced premium payments. The rapid success of life insurance during the late 1840s, as seen in Figure 3, thus can be attributed both to this active marketing as well as to the appeal of mutual insurance itself.

Regulation and Stagnation after 1849

While many of these companies operated on a sound financial basis, the ease of formation opened the field to several fraudulent or fiscally unsound companies. Stock institutions, concerned both with the reputation of life insurance in general and with self-preservation, lobbied the New York state legislature for a law to limit the operation of mutual companies. On April 10, 1849 the legislature passed a law requiring all new insurance companies either incorporating or planning to do business in New York to possess $100,000 of capital stock. Two years later, the legislature passed a more stringent law obligating all life insurance companies to deposit $100,000 with the Comptroller of New York. While this capital requirement was readily met by most stock companies and by the more established New York-based mutual companies, it effectively dampened the movement toward mutualization until the 1890s. Additionally, twelve out-of-state companies ceased doing business in New York altogether, leaving only the New England Mutual and the Mutual Benefit of New Jersey to compete with the New York companies in one of the largest markets. These laws were also largely responsible for the decade-long stagnation in insurance sales beginning in 1849 [Figure 3].

The Civil War and Its Aftermath

By the end of the 1850s life insurance sales again began to increase, climbing to almost $200 million by 1862 before tripling to just under $600 million by the end of the Civil War; life insurance in force peaked at $2 billion in 1871 [Figures 3 and 4]. Several factors contributed to this renewed success. First, the establishment of insurance departments in Massachusetts (1856) and New York (1859) to oversee the operation of fire, marine, and life insurance companies stimulated public confidence in the financial soundness of the industry. Additionally, in 1861 the Massachusetts legislature passed a non-forfeiture law, which forbade companies from terminating policies for lack of premium payment. Instead, the law stipulated that policies be converted to term life policies and that companies pay any death claims that occurred during this term period [term policies are issued only for a stipulated number of years, require reapplication on a regular basis, and consequently command significantly lower annual premiums which rise rapidly with age]. This law was further strengthened in 1880 when Massachusetts mandated that policyholders have the additional option of receiving a cash surrender value for a forfeited policy.

The Civil War was another factor in this resurgence. Although the industry had no experience with mortality during war – particularly a war on American soil – and most policies contained clauses voiding them in the case of military service, several major companies decided to insure war risks for an additional premium rate of 2% to 5%. While most companies just about broke even on these soldiers’ policies, the goodwill and publicity engendered with the payment of each death claim combined with a generally heightened awareness of mortality to greatly increase interest in life insurance. In the immediate postbellum period, investment in most industries increased dramatically, and life insurance was no exception. Whereas only 43 companies existed on the eve of the war, the newfound popularity of life insurance resulted in the establishment of 107 companies between 1865 and 1870 [Figure 1].

Tontines

The other major innovation in life insurance occurred in 1867 when the Equitable Life Assurance Society (1859) began issuing tontine or deferred dividend policies. While a portion of each premium payment went directly towards an ordinary insurance policy, another portion was deposited in an investment fund with a set maturity date (usually 10, 15, or 20 years) and a restricted group of participants. The beneficiaries of deceased policyholders received only the face value of the standard life component while participants who allowed their policy to lapse either received nothing or only a small cash surrender value. At the end of the stipulated period, the dividends that had accumulated in the fund were divided among the remaining participants. Agents often promoted these policies with inflated estimates of future returns – and always assured the potential investor that he would be a beneficiary of the high lapse rate and not one of the lapsing participants. Estimates indicate that approximately two-thirds of all life insurance policies in force in 1905 – at the height of the industry’s power – were deferred dividend plans.
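The deferred dividend mechanics described above can be illustrated with a simple hypothetical calculation. All of the numbers in the sketch below (pool size, premium split, lapse rate, investment return) are invented assumptions rather than historical figures, and deaths within the pool are ignored for simplicity.

```python
# Hypothetical sketch of a deferred dividend (tontine) pool.
# All parameters are illustrative assumptions, not historical figures,
# and deaths within the pool are ignored for simplicity.

participants = 1000          # policyholders entering the pool
annual_premium = 100.0       # each participant's yearly premium (dollars)
tontine_share = 0.4          # portion of each premium deposited in the fund
years = 20                   # stipulated maturity period
annual_lapse_rate = 0.05     # share of participants lapsing each year
annual_return = 0.04         # assumed investment return on the fund

fund = 0.0
survivors = participants
for _ in range(years):
    # The rest of each premium (1 - tontine_share) funds the ordinary
    # life insurance component and is not part of the deferred fund.
    fund = fund * (1 + annual_return) + survivors * annual_premium * tontine_share
    survivors = int(survivors * (1 - annual_lapse_rate))  # lapsers forfeit their share

print(f"Participants remaining at maturity: {survivors}")
print(f"Deferred dividend per remaining participant: ${fund / survivors:,.2f}")
# Because lapsing participants forfeit their accumulated contributions,
# the payout per persisting policyholder grows with the lapse rate --
# the feature agents emphasized when selling these policies.
```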

Reorganization and Innovation

The success and profitability of life insurance companies bred stiff competition during the 1860s; the resulting market saturation and a general economic downturn combined to push the industry into a severe depression during the 1870s. While the more well-established companies such as the Mutual Life Insurance Company of New York, the New York Life Insurance Company (1843), and the Equitable Life Assurance Society were strong enough to weather the depression with few problems, most of the new corporations organized during the 1860s were unable to survive the downturn. All told, 98 life insurance companies went out of business between 1868 and 1877, with 46 ceasing operations during the depression years of 1871 to 1874 [Figure 1]. Of these, 32 failed outright, resulting in $35 million of losses for policyholders. Not until 1888 did the amount of insurance in force surpass its 1870 peak [Figure 4].

Assessment and Fraternal Insurance Companies

Taking advantage of these problems within the industry were numerous assessment and fraternal benefit societies. Assessment or cooperative companies, as they were sometimes called, were associations in which each member was assessed a flat fee to provide the death benefit when another member died rather than paying an annual premium. The two main problems with these organizations were the uncertain number of assessments each year and the difficulty of maintaining membership levels. As members aged and death rates rose, the assessment societies found it difficult to recruit younger members willing to take on the increasing risks of assessments. By the turn of the century, most assessment companies had collapsed or reorganized as mutual companies.

Fraternal organizations were voluntary associations of people affiliated through ethnicity, religion, profession, or some other tie. Although fraternal societies had existed throughout the history of the United States, it was only in the postbellum era that they mushroomed in number and emerged as a major provider of life insurance, mainly for working-class Americans. While many fraternal societies initially issued insurance on an assessment basis, most soon switched to mutual insurance. By the turn of the century, the approximately 600 fraternal societies in existence provided over $5 billion in life insurance to their members, making them direct competitors of the major stock and mutual companies. Just 5 years later, membership was over 6 million with $8 billion of insurance in force [Figure 4].

Industrial Life Insurance

For the few successful life insurance companies organized during the 1860s and 1870s, innovation was the only means of avoiding failure. Aware that they could not compete with the major companies in a tight market, these emerging companies concentrated on markets previously ignored by the larger life insurance organizations – looking instead to the example of the fraternal benefit societies. Beginning in the mid-1870s, companies such as the John Hancock Company (1862), the Metropolitan Life Insurance Company (1868), and the Prudential Insurance Company of America (1875) started issuing industrial life insurance. Industrial insurance, which began in England in the late 1840s, targeted lower income families by providing policies in amounts as small as $100, as opposed to the thousands of dollars normally required for ordinary insurance. Premiums ranging from $0.05 to $0.65 were collected on a weekly basis, often by agents coming door-to-door, instead of on an annual, semi-annual, or quarterly basis by direct remittance to the company. Additionally, medical examinations were often not required and policies could be written to cover all members of the family instead of just the main breadwinner. While the number of policies written skyrocketed to over 51 million by 1919, industrial insurance remained only a fraction of the amount of life insurance in force throughout the period [Figures 4 and 5].

International Expansion

The major life insurance companies also quickly expanded into the global market. While numerous firms ventured abroad as early as the 1860s and 1870s, the most rapid international growth occurred between 1885 and 1905. By 1900, the Equitable was providing insurance in almost 100 nations and territories, the New York Life in almost 50 and the Mutual in about 20. The international premium income (excluding Canada) of these Big Three life insurance companies amounted to almost $50 million in 1905, covering over $1 billion of insurance in force.

The Armstrong Committee Investigation

In response to a multitude of newspaper articles portraying extravagant spending and political payoffs by executives at the Equitable Life Assurance Society – all at the expense of their policyholders – Superintendent Francis Hendricks of the New York Insurance Department reluctantly conducted an investigation of the company in 1905. His report substantiated these allegations and prompted the New York legislature to create a special committee, known as the Armstrong Committee, to examine the conduct of all life insurance companies operating within the state. Appointed chief counsel of the investigation was future United States Supreme Court Chief Justice Charles Evans Hughes. Among the abuses uncovered by the committee were interlocking directorates, the creation of subsidiary financial institutions to evade restrictions on investments, the use of proxy voting to frustrate policyholder control of mutuals, unlimited company expenses, tremendous spending for lobbying activities, rebating (the practice of returning to a new client a portion of their first premium payment as an incentive to take out a policy), the encouragement of policy lapses, and the condoning of “twisting” (a practice whereby agents misrepresented and libeled rival firms in order to convince a policyholder to sacrifice their existing policy and replace it with one from that agent). Additionally, the committee severely chastised the New York Insurance Department for permitting such malpractice to occur and recommended the enactment of a wide array of reform measures. These revelations induced numerous other states to conduct their own investigations, including New Jersey, Massachusetts, Ohio, Missouri, Wisconsin, Tennessee, Kentucky, Minnesota, and Nebraska.

New Regulations

In 1907, the New York legislature responded to the committee’s report by issuing a series of strict regulations specifying acceptable investments, limiting lobbying practices and campaign contributions, democratizing management through the elimination of proxy voting, standardizing policy forms, and limiting agent activities including rebating and twisting. Most devastating to the industry, however, were the prohibition of deferred dividend policies and the requirement of regular dividend payments to policyholders. Nineteen other states followed New York’s lead in adopting similar legislation but the dominance of New York in the insurance industry enabled it to assert considerable influence over a large percentage of the industry. The state invoked the Appleton Rule, a 1901 administrative rule devised by New York Deputy Superintendent of Insurance Henry D. Appleton that required life insurance companies to comply with New York legislation both in New York and in all other states in which they conducted business, as a condition of doing business in New York. As the Massachusetts insurance commissioner immediately recognized, “In a certain sense [New York’s] supervision will be a national supervision, as its companies do business in all the states.” The rule was officially incorporated into New York’s insurance laws in 1939 and remained both in effect and highly effective until the 1970s.

Continued Growth in the Early Twentieth Century

The Armstrong hearings and the ensuing legislation renewed public confidence in the safety of life insurance, resulting in a surge of new company organizations not seen since the 1860s. Whereas only 106 companies existed in 1904, another 288 were established in the ten years from 1905 to 1914 [Figure 1]. Life insurance in force likewise rose rapidly, increasing from $20 billion on the eve of the hearings to almost $46 billion by the end of World War I, with the share insured by the fraternal and assessment societies decreasing from 40% to less than a quarter [Figure 5].

Group Insurance

One major innovation to occur during these decades was the development of group insurance. In 1911 the Equitable Life Assurance Society wrote a policy covering the 125 employees of the Pantasote Leather Company, requiring neither individual applications nor medical examinations. The following year, the Equitable organized a group department to promote this new product and soon was insuring the employees of Montgomery Ward Company. By 1919, 29 companies wrote group policies, which amounted to over a half billion dollars worth of life insurance in force.

War Risk Insurance

Not included in Figure 5 is the War Risk insurance issued by the United States government during World War I. Beginning in April 1917, all active military personnel received a $4,500 insurance policy payable by the federal government in the case of death or disability. In October of the same year, the government began selling low-cost term life and disability insurance, without medical examination, to all active members of the military. War Risk insurance proved to be extremely popular during the war, reaching over $40 billion of life insurance in force by 1919. In the aftermath of the war, these term policies quickly declined to under $3 billion of life insurance in force, with many servicemen turning instead to the whole life policies offered by the stock and mutual companies. As was the case after the Civil War, life insurance sales rose dramatically after World War I, peaking at $117 billion of insurance in force in 1930. By the eve of the Great Depression there existed over 120 million life insurance policies – approximately equivalent to one policy for every man, woman, and child living in the United States at that time.

(Sharon Ann Murphy is a Ph.D. Candidate at the Corcoran Department of History, University of Virginia.)

References and Further Reading

Buley, R. Carlyle. The American Life Convention, 1906-1952: A Study in the History of Life Insurance. New York: Appleton-Century-Crofts, Inc., 1953.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames, Iowa: Iowa State University Press, 1988.

Keller, Morton. The Life Insurance Enterprise, 1885-1910: A Study in the Limits of Corporate Power. Cambridge, MA: Belknap Press, 1963.

Kimball, Spencer L. Insurance and Public Policy: A Study in the Legal Implications of Social and Economic Public Policy, Based on Wisconsin Records 1835-1959. Madison, WI: University of Wisconsin Press, 1960.

Merkel, Philip L. “Going National: The Life Insurance Industry’s Campaign for Federal Regulation after the Civil War.” Business History Review 65 (Autumn 1991): 528-553.

North, Douglass. “Capital Accumulation in Life Insurance between the Civil War and the Investigation of 1905.” In Men in Business: Essays on the Historical Role of the Entrepreneur, edited by William Miller, 238-253. New York: Harper & Row Publishers, 1952.

Ransom, Roger L., and Richard Sutch. “Tontine Insurance and the Armstrong Investigation: A Case of Stifled Innovation, 1868-1905.” Journal of Economic History 47, no. 2 (June 1987): 379-390.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge, MA: Harvard University Press, 1942.

Table 1

Early American Life Insurance Companies, 1759-1844

Company Year Chartered Terminated Insurance in Force in 1840
Corp. for the Relief of Poor and Distressed Widows and Children of Presbyterian Ministers (Presbyterian Ministers Fund) 1759
Corporation for the Relief of the Widows and Children of Clergymen in the Communion of the Church of England in America (Episcopal Ministers Fund) 1769
Insurance Company of the State of Pennsylvania 1794 1798
Insurance Company of North America, PA 1794 1798
United Insurance Company, NY 1798 1802
New York Insurance Company 1798 1802
Pennsylvania Company for Insurances on Lives and Granting Annuities 1812 1872* 691,000
New York Mechanics Life & Fire 1812 1813
Dutchess County Fire, Marine & Life, NY 1814 1818
Massachusetts Hospital Life Insurance Company 1818 1867* 342,000
Union Insurance Company, NY 1818 1840
Aetna Insurance Company (mainly fire insurance; separate life company chartered in 1853) 1820 1853
Farmers Loan & Trust Company, NY 1822 1843
Baltimore Life Insurance Company 1830 1867 750,000 (est.)
New York Life Insurance & Trust Company 1830 1865* 2,880,000
Lawrenceburg Insurance Company 1832 1836
Mississippi Insurance Company 1833 1837
Protection Insurance Company, Mississippi 1833 1837
Ohio Life Ins. & Trust Co. (life policies appear to have been reinsured with New York Life & Trust in the late 1840s) 1834 1857 54,000
New England Mutual Life Insurance Company, Massachusetts (did not begin issuing policies until 1844) 1835 0
Ocean Mutual, Louisiana 1835 1839
Southern Life & Trust, Alabama 1836 1840
American Life Insurance & Trust Company, Baltimore 1836 1840
Girard Life Insurance, Annuity & Trust Company, Pennsylvania 1836 1894 723,000
Missouri Life & Trust 1837 1841
Missouri Mutual 1837 1841
Globe Life Insurance, Trust & Annuity Company, Pennsylvania 1837 1857
Odd Fellow Life Insurance and Trust Company, Pennsylvania 1840 1857
National of Pennsylvania 1841 1852
Mutual Life Insurance Company of New York 1842
New York Life Insurance Company 1843
State Mutual Life Assurance Company, Massachusetts 1844

*Date company ceased writing life insurance.

Citation: Murphy, Sharon. “Life Insurance in the United States through World War I”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2002. URL http://eh.net/encyclopedia/life-insurance-in-the-united-states-through-world-war-i/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year  Census of Manufacturing  Jones Manufacturing  Owen Nonstudent Males  Greis Manufacturing  Greis All Workers  Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coalminers’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell by nearly half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime spent in paid work (due largely to lengthening periods of education and retirement), the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends, Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
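
Fogel’s “less than one-fourth” projection follows directly from the figures in Table 6. The short calculation below simply divides lifetime work hours by lifetime discretionary hours for each year; it is a check on the table, not a new estimate.

```python
# Share of lifetime discretionary time devoted to work, from Table 6 (Fogel, 2000).
table6 = {
    1880: {"discretionary": 225_900, "work": 182_100},
    1995: {"discretionary": 298_500, "work": 122_400},
    2040: {"discretionary": 321_900, "work": 75_900},
}
for year, hours in table6.items():
    share = hours["work"] / hours["discretionary"]
    print(f"{year}: {share:.0%} of lifetime discretionary time devoted to work")
# Prints roughly 81% for 1880, 41% for 1995, and 24% for 2040 -- less than one-fourth.
```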

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee fell from 1908 to 1704 in the U.S. between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2170 hours to 1698 hours between 1950 and 1979. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than those in Denmark, and less than those in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours, and by the late 1860s efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago a bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law, in 1874, set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58 hours, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later, LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period — 1916’s Adamson Act, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays or the full day off on Saturday — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, compared with only 32 in 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford’s employees accounted for more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that the productivity gains from reducing hours ceased once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours Reductions during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933 the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the end of the war, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told of the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.
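
A minimal sketch of this tradeoff, with every number invented for illustration rather than taken from the literature surveyed here: the employer bears a quasi-fixed weekly cost per worker, output per hour falls as the workweek lengthens, and the cost-minimizing workweek balances the two pressures.

```python
# Illustrative only: fixed hiring costs push toward longer workweeks, while
# fatigue (falling output per hour) pushes toward shorter ones.
def cost_per_unit_output(hours, wage=1.0, fixed_cost=10.0, fatigue=0.004):
    """Weekly labor cost per unit of output for a single worker."""
    weekly_cost = fixed_cost + wage * hours          # fixed hiring cost plus hourly pay
    weekly_output = hours * (1 - fatigue * hours)    # longer weeks lower output per hour
    return weekly_cost / weekly_output

# Search candidate workweeks for the cost-minimizing length.
best = min(range(30, 81), key=cost_per_unit_output)
print(f"cost-minimizing workweek: {best} hours")     # about 41 with these assumptions
```

Raising the assumed fixed cost per worker pushes the cost-minimizing workweek up, while a stronger assumed fatigue effect pushes it down, which is exactly the tension described above.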

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
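
One stylized way to see how rising wages can lead workers to "buy" leisure (an illustration of the income effect, not a model drawn from Whaples' survey or the works it covers): suppose workers must first cover a subsistence level of consumption before leisure is valued. With Stone-Geary-style preferences U = (c - c0)^a * leisure^(1-a) and consumption c = w*h, the chosen workweek is h* = a*T + (1-a)*c0/w, which falls as the real wage w rises. All parameter values below are assumptions chosen only for illustration.

```python
# Stylized income-effect illustration: chosen hours fall as the real wage rises.
def chosen_hours(wage, a=0.35, c0=12.0, T=100.0):
    """Utility-maximizing weekly hours, h* = a*T + (1 - a)*c0/wage.

    a  : weight on consumption above subsistence
    c0 : subsistence consumption that must be earned each week
    T  : weekly hours available for either work or leisure
    """
    return a * T + (1 - a) * c0 / wage

for w in (0.25, 0.5, 1.0, 2.0):      # rising real hourly wage
    print(f"real wage {w:.2f}: chosen workweek {chosen_hours(w):.1f} hours")
# Falls from about 66 hours at the lowest wage to about 39 at the highest,
# loosely resembling the long-run decline in Tables 1 and 2 (purely illustrative).
```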

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.
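
A back-of-the-envelope translation of these shares into hours, taking the Census of Manufactures workweek in Table 2 (55.1 hours in 1914, 50.8 in 1919) as the decline being explained. Pairing the cited shares with this particular series is an assumption made here for illustration, not Whaples' own calculation.

```python
# Illustrative decomposition of the 1914-1919 drop in the manufacturing workweek.
decline = 55.1 - 50.8                 # about 4.3 hours (Table 2, Census of Manufactures)
shares = {
    "real wage growth": 1 / 2,        # "about half"
    "reduced immigration": 1 / 5,     # "about one-fifth"
    "increased unionization": 1 / 7,  # "about one-seventh"
}
for factor, share in shares.items():
    print(f"{factor}: roughly {share * decline:.1f} of the {decline:.1f}-hour decline")
```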

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
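
To give a concrete sense of the size of this wage-hours relationship, the short calculation below assumes a baseline workweek of 51 hours (roughly the 1919 manufacturing average in Table 2) and compares a city with wages 10 percent higher than another's; the two elasticities are the endpoints cited above, and the baseline is an assumption made here for illustration.

```python
# Implied workweek difference between two cities whose wages differ by 10 percent.
baseline_hours = 51.0                  # assumed 1919-style manufacturing workweek
wage_gap = 0.10                        # wages 10 percent higher in one city
for elasticity in (-0.05, -0.13):      # range of estimates cited in the text
    pct_change = elasticity * wage_gap
    hours_change = pct_change * baseline_hours
    print(f"elasticity {elasticity}: workweek about {abs(hours_change):.2f} hours shorter "
          f"({abs(pct_change):.1%})")
```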

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work': Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/