
Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, or "exit," is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce "public goods" enjoyed by all, including those who "free ride" rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to "free ride," unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).
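One stylized way to formalize this logic (a hypothetical illustration added here, not part of the original argument): suppose each of n workers can support an organizing drive at personal cost c, a successful union is worth b to every worker, and one additional supporter raises the probability of success by Δp. A self-interested worker contributes only if the expected private gain exceeds the cost, yet the group gains whenever the collective benefit does:

\[
\underbrace{\Delta p \, b}_{\text{private gain}} < c
\quad \text{while} \quad
\underbrace{\Delta p \, n b}_{\text{collective gain}} > c
\quad \text{for large } n .
\]

When the first inequality holds for every worker, each rationally free rides even though all would be better off if everyone contributed.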

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members, it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent (as with Sweden in 1985).

Table 2
Union Growth in Peak and Other Years

Country / Years / Growth in Top 5 Years / Growth in Top 10 Years / Growth in All Years / Share of Growth in Top 5 (%) / Share in Top 10 (%) / Excess Growth in Top 5 (%) / Excess in Top 10 (%)
Australia 83 720,000 1,230,000 3,125,000 23.0 39.4 17.0 27.3
Austria 52 5,411,000 6,545,000 3,074,000 176.0 212.9 166.8 194.4
Canada 108 855,000 1,532,000 4,028,000 21.2 38.0 16.6 28.8
Denmark 85 521,000 795,000 1,883,000 27.7 42.2 21.8 30.5
France 92 6,605,000 7,557,000 2,872,000 230.0 263.1 224.5 252.3
Germany 82 10,849,000 13,543,000 9,120,000 119.0 148.5 112.9 136.3
Italy 38 3,028,000 4,671,000 3,713,000 81.6 125.8 68.4 99.5
Japan 43 4,757,000 6,692,000 8,983,000 53.0 74.5 41.3 51.2
Netherlands 71 671,000 1,009,000 1,158,000 57.9 87.1 50.9 73.0
Norway 85 304,000 525,000 1,177,000 25.8 44.6 19.9 32.8
Sweden 99 633,000 1,036,000 3,859,000 16.4 26.8 11.4 16.7
UK 96 4,929,000 8,011,000 8,662,000 56.9 92.5 51.7 82.1
US 109 10,247,000 14,796,000 22,293,000 46.0 66.4 41.4 57.2
Total 1043 49,530,000 67,942,000 73,947,000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. Because some growth is temporary, with years of rapid growth followed by years of decline, growth in the top five or ten years can exceed total growth over the entire period (as with Austria and France).

Sources: Bain and Price (1980): 39, Visser (1989)
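To make the excess-growth calculation in the notes concrete, consider a worked example (added here for illustration) using the Canadian row of Table 2. Canada's five fastest-growing years account for 21.2 percent of its total membership growth; with 108 years of data, an even distribution would place only 5/108, about 4.6 percent, of growth in any five years:

\[
\text{Excess growth (top 5 years)} = 21.2\% - \tfrac{5}{108}\times 100\% \approx 21.2\% - 4.6\% = 16.6\%,
\]

which matches the table's entry for Canada.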

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country Striker Rate Quartile Change
Lowest Second Third Highest
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile. The "Change" column reports the difference between average growth in the highest and lowest quartiles.
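For readers who wish to replicate this procedure on comparable data, the following short Python sketch follows the steps described in the note above. It is illustrative only: the data frame and its column names ('country', 'strikers', 'labor_force', 'growth_pct') are hypothetical stand-ins for the underlying Bain and Price (1980) and Visser (1989) series, not part of the original study.

import pandas as pd

def quartile_growth(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per country-year with hypothetical columns
    # 'country', 'strikers', 'labor_force', 'growth_pct'.
    df = df.copy()
    # Striker rate: strikers as a share of the nonagricultural labor force.
    df["striker_rate"] = df["strikers"] / df["labor_force"]
    # Within each country, sort years into four equal-sized quartiles
    # according to the striker rate.
    df["quartile"] = df.groupby("country")["striker_rate"].transform(
        lambda s: pd.qcut(s, 4, labels=["Lowest", "Second", "Third", "Highest"])
    )
    # Average annual membership growth within each country-quartile cell.
    table = (
        df.groupby(["country", "quartile"], observed=True)["growth_pct"]
        .mean()
        .unstack()
    )
    # 'Change' reproduces the table's last column: highest minus lowest quartile.
    table["Change"] = table["Highest"] - table["Lowest"]
    return table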

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive, they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, "masters" working beside "journeymen" and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality standards and by capping wages, employment, and output. Controlled by the independent masters who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few wage earners could anticipate moving up to become master artisans or to own their own establishments. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who labored for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the discretion of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions secured a strong bargaining position, one enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions' characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a "take-it-or-leave-it" basis; either the employers accepted the demands or they fought a contest of strength to determine whether they could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, or by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak, one that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded, but only when they attract allies among politicians, state officials, and the affluent public. By sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly through the mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement, when employers and conservative politicians worked to put labor's genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers — workers whom the historian Eric Hobsbawm labeled the "labor aristocracy" (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, of whom 120,000 belonged to craft unions, including carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to "industrial" unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of the trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently, with large but short strikes involving skilled and unskilled workers. The Knights' industrial leverage depended on political and social influence. The KOL could succeed where trade unions would not go because its strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, "solidarity" strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from threatened authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world's first May Day. This led directly to the collapse of the KOL. The May Day strike wave of 1886 and the bombing at Haymarket Square in Chicago provoked a "red scare" of historic proportions, driving membership down to half a million by September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL's decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other European countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, the unions and the party formed a centralized labor movement to maximize labor's political leverage. English union membership was divided between a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labour Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the late 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (CGT, founded in 1895), which they tried to use as a base for a revolutionary general strike in which the workers would seize economic and political power. Alongside the Bourses du travail, which consolidated craft unions into local and regional federations, syndicalists conducted large strikes designed to demonstrate labor's solidarity. Paradoxically, the syndicalists' large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists win concessions beyond any they could win with economic leverage alone. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor's support against powerful economic and social groups who would have replaced the Republic with an authoritarian regime. Reminded daily of the importance of republican values and of the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, allowing French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France's modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strikebreakers, advanced labor's political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions and reorganized as the AFL in 1886, the Federation was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL's founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth-century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of agriculture, commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise most union members belonged to craft organizations, including nearly half the printers and a third of cigar makers, construction workers, and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture Forestry Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation Communication Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: union membership from Wolman (1936), employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1983, and 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry, among workers still performing traditional tasks where training came through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France's revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper, and metal fabrication, industries using technologies without traditional craft skills. AFL strongholds, including construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen's economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass-production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept them.

Unions in the World War I Era

The AFL and World War I

For all its limits, the AFL and its craft affiliates survived while their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, under favorable circumstances, could form the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking, and steel, doubling union membership between 1915 and 1919. But when federal support was withdrawn after the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL's failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of a deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-time: 1913 12,498,000 11,742,000 756,000
1920 27,649,000 25,687,000 1,962,000
Growth 1913-20 121% 119% 160%
Post-war (12 countries): 1920 27,649,000
1929 18,149,000
Growth 1920-29 -34%

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustration with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war: 2.5 million strikers in France in 1919 and 1920, up from 200,000 in 1913; 13 million strikers in Germany, up from 300,000 in 1913; and 5 million strikers in the United States, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that "The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other" (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforces of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a 1920 peak of 26 million members in eleven countries to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor's opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers' allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass-production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades, where employment was usually declining. By 1924, they had been almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals, and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion, open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse — a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing; they depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt's election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France, and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout, there was an impulse to take public control over the economy because free-market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership as unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by 76 percent in ten countries between 1933 and 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt's New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in the eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression: 1929 12,401,000 11,508,000
1933 11,455,000 10,802,000
Growth 1929-33 -7.6% -6.1%
Popular Front Period (10 countries): 1933 10,802,000
1938 19,007,000
Growth 1933-38 76.0%
Second World War (10 countries): 1938 19,007,000
1947 35,485,000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front's victory in the elections of May 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song, and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France's economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the "holiday feeling" and the sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Phillippe and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier's official residence in Paris. Union leaders and the heads of France's leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, these agreements gave French unions new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members, with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as "the greatest victory of the workers' movement." The agreements brought lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as "soldiers on leave," and they soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm was developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving "employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers." AFL leader William Green pronounced this a "charter of industrial freedom," and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, and aluminum, lumber, and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass-production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not the AFL's flawed tactics but its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935, almost as many industrial establishments had employer-dominated employee-representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters' Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became the independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism, and other industries. Building effectively on local rank-and-file militancy, including sitdown strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged to enforce employees’ “right to self-organization, to form, join, or assist labor organizations to bargain collectively through representatives of their own choosing and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. Shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act's preamble as a mandate to promote organization. By 1945, the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period's union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years, until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937). Furthermore, the election procedure's gross contribution of 5,000,000 members was less than half of the period's net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open-shop employers in cities like Akron, Ohio, and Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania, and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions both by eliminating unemployment and because state officials supported unions to gain labor's support for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce in which unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and "maintenance of membership" rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. "Maintenance of membership" rules prevented free riding even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected: American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War II promoted unions and social change. A European civil war, the war divided the continent not only between warring countries but within countries, between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry of socialists and Communists into government.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labour Party government elected in the United Kingdom in 1945 established a new National Health Service and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left were stronger everywhere throughout post-war Europe, but in some countries labor's position deteriorated quickly. With the onset of the Cold War, the popular fronts uniting Communists, socialists, and bourgeois liberals in France, Italy, and Japan dissolved, and labor's management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as Scandinavia but also Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom, and the United States, because their unions had not been accepted as bargaining partners by management and they lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s would carry most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into the government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American exceptionalism became most pronounced, as the United States emerged as the advanced capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies; it has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor's political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, "Operation Dixie," failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO suffered a defeat that left the South as a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor during a time of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, it plunged the CIO into a civil war; non-Communist affiliates raided locals belonging to the “communist-led” unions fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason remained for the CIO to remain independent. In 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America's unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned all higher aspirations, using their unions for purely personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a "golden age" for American unions. Established unions found a secure place at the bargaining table with America's leading firms in such industries as autos, steel, trucking, and chemicals. Contracts were periodically negotiated providing for the exchange of good wages for cooperative workplace relations. Rules were negotiated providing a system of civil authority at work, with negotiated regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience, and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor and between management salaries and worker wages. Unions also won a growing list of benefit programs: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this extension limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that this higher productivity paid for much of the union wage gain. Others, however, find little productivity gain for unionized workers after account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization’s productivity benefits.

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties, Communist in France and Italy, socialist or labor elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of the racist provisions in their own practices. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But here too, the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of the United States union movement. Maintaining their strength in traditional, masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, by contrast, union decline in these industries combined with growth in heavily female public-sector employment led to the feminization of the American labor movement. Union membership began to decline in the private sector in the United States immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public-sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public-sector workers, increasing union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s, and despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early-twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). Yet there remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector, and among those private employers where workers have a free choice, workers join unions as readily as they ever did, and as readily as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, the unions revived once a government committed to workplace democracy sheltered them from employer repression. If another such government appears, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric. Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Phillippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford University Press, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919- 1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States, 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers’ Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

History of Labor Turnover in the U.S.

Laura Owen, DePaul University

Labor turnover measures the movement of workers in and out of employment with a particular firm. Consequently, concern with the issue and interest in measuring such movement only arose when working for an employer (rather than self-employment in craft or agricultural production) became the norm. The rise of large-scale firms in the late nineteenth century and the decreasing importance (in percentage terms) of agricultural employment meant that a growing number of workers were employed by firms. It was only in this context that interest in measuring labor turnover and understanding its causes began.

Trends in Labor Turnover

Labor turnover is typically measured in terms of the separation rate (quits, layoffs, and discharges per 100 employees on the payroll). The aggregate data on turnover among U.S. workers are available from a series of studies focusing almost entirely on the manufacturing sector. These data show high rates of labor turnover (annual rates exceeding 100%) in the early decades of the twentieth century, substantial declines in the 1920s, significant fluctuations during the economic crisis of the 1930s and the boom of the World War II years, and a return to the low rates of the 1920s in the post-war era. (See Figure 1 and its notes.) Firm- and state-level data (from the late nineteenth and early twentieth centuries) also indicate that labor turnover rates exceeding 100 percent were common in many industries.
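
The definition amounts to simple bookkeeping. A minimal sketch, using made-up numbers rather than the historical series, is:

# Illustrative sketch: the separation rate scales annual counts of quits,
# layoffs, and discharges by the average number of employees on the payroll.
# All numbers here are hypothetical.

def rate_per_100(events, average_payroll):
    """Events per 100 employees on the payroll."""
    return 100.0 * events / average_payroll

payroll = 1_200                            # hypothetical average payroll
quits, layoffs, discharges = 900, 300, 60  # hypothetical annual counts

separation_rate = rate_per_100(quits + layoffs + discharges, payroll)
print(f"{separation_rate:.1f} separations per 100 employees")  # 105.0, i.e. over 100%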

Contemporaries expressed concern over the high rates of labor turnover in the early part of the century and conducted numerous studies to understand its causes and consequences. (See, for example, Douglas 1918, Lescohier 1923, and Slichter 1921.) Some of these studies focused on the irregularity in labor demand that resulted in seasonal and cyclical layoffs. Others interpreted the high rates of labor turnover as an indication of worker dissatisfaction and labor relations problems. Many observers began to recognize that labor turnover was costly for the firm (in terms of increased hiring and training expenditures) and for the worker (in terms of irregularity of income flows).

Both the high rates of labor turnover in the early years of the twentieth century and the dramatic declines in the 1920s are closely linked with changes in the worker-initiated component of turnover rates. During the 1910s and 1920s, quits accounted (on average) for over seventy percent of all separations, and the decline in annual separation rates from 123.4 in 1920 to 37.1 in 1928 was primarily driven by a decline in quit rates, from 100.9 to 25.8 per 100 employees; falling quits thus account for roughly 75 points of the 86-point drop in total separations.

Explanations of the Decline in Turnover in the 1920s

The aggregate decline in labor turnover in the 1920s appears to be the beginning of a long run trend. Numerous studies, seeking to identify why workers began quitting their jobs less frequently, have pointed to the role of altered employment relationships. (See, for example, Owen 1995b, Ozanne 1967, and Ross 1958.) The new practices of employers, categorized initially as welfare work and later as the development of internal labor markets, included a variety of policies aimed at strengthening the attachment between workers and firms. The most important of these policies were the establishment of personnel or employment departments, the offering of seniority-based compensation, and the provision of on-the-job training and internal promotion ladders. In the U.S., these changes in employment practices began at a few firms around the turn of the twentieth century, intensified during WWI and became more widespread in the 1920s. However, others have suggested that the changes in quit behavior in the 1920s were the result of immigration declines (due to newly implemented quotas) and slack labor markets (Goldin 2000, Jacoby 1985).

Even firms’ motives for implementing the new practices are subject to debate. One argument focuses on how the shift from craft to mass production increased the importance of firm-specific skills and on-the-job training. Firms’ greater investment in training meant that it was more costly to have workers leave and provided the incentive for firms to lower turnover. However, others have provided evidence that job ladders and internal promotion were not always implemented to reward the increased worker productivity resulting from on-the-job training. Rather, these employment practices were sometimes attempts to appease workers and to prevent unionization. Labor economists have also noted that providing various forms of deferred compensation (pensions, wages which increase with seniority, etc.) can increase worker effort and reduce the costs of monitoring workers. Whether promotion ladders established within firms reflect an attempt to forestall unionization, a means of protecting firm investments in training by lowering turnover, or a method of ensuring worker effort is still open to debate, though the explanations are not necessarily mutually exclusive (Jacoby 1983, Lazear 1981, Owen 1995b, Sundstrom 1988, Stone 1974).

Subsequent Patterns of Labor Turnover

In the 1930s and 1940s the volatility in labor turnover increased and the relationships between the components of total separations shifted (Figure 1). The depressed labor markets of the 1930s meant that procyclical quit rates declined, but increased layoffs kept total separation rates relatively high (on average, 57 per 100 employees between 1930 and 1939). During the tight labor markets of the World War II years, turnover again reached rates exceeding 100%, with increases in quits acting as the primary determinant. Quits and total separations declined after the war, producing much lower and less volatile turnover rates between 1950 and 1970 (Figure 1).

Though the decline in labor turnover in the early part of the twentieth century was seen by many as a sign of improved labor-management relations, the low turnover rates of the post-WWII era led macroeconomists to question the benefits of strong attachments between workers and firms. Specifically, there was concern that long-term employment contracts (either implicit or explicit) might generate wage rigidities which could result in increased unemployment and other labor market adjustment problems (Ross 1958). More recently, labor economists have wondered whether the movement toward long-term attachments between workers and firms is reversing itself. “Changes in Job Stability and Job Security,” a special issue of the Journal of Labor Economics (October 1999), includes numerous analyses suggesting that job instability increased among some groups of workers (particularly those with longer tenure) amidst the restructuring activities of the 1990s.

Turnover Data and Methods of Analysis

The historical analyses of labor turnover have relied upon two types of data. The first consists of firm-level data on turnover within a particular workplace, or data collected by governments (through firms) on turnover within particular industries or geographic locales. If these turnover data are broken down into their components – quits, layoffs, and discharges – a quit-rate model (such as the one developed by Parsons 1973) can be employed to analyze the worker-initiated component of turnover as it relates to job search behavior. These analyses (see, for example, Owen 1995a) estimate quit rates as a function of variables reflecting labor demand conditions (e.g., unemployment and relative wages) and of labor supply variables reflecting the composition of the labor force (e.g., age/gender distributions and immigrant flows).
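
The sketch below shows the general shape of such an estimation; it is a minimal illustration on synthetic data, not Parsons’s (1973) specification or the data of the studies cited, and all variable names are assumptions.

# A minimal quit-rate regression sketch on synthetic data. The specification
# and variable names (unemployment, relative_wage, pct_young, pct_immigrant)
# are illustrative assumptions, not the cited studies' actual models or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # e.g., hypothetical industry-year observations

unemployment  = rng.uniform(2, 15, n)     # labor-demand condition (%)
relative_wage = rng.uniform(0.8, 1.3, n)  # industry wage / alternative wage
pct_young     = rng.uniform(10, 40, n)    # labor-force composition (%)
pct_immigrant = rng.uniform(0, 30, n)

# Synthetic quit rate: quits fall when jobs are scarce and relative pay is high.
quit_rate = (60 - 2.5 * unemployment - 20 * relative_wage
             + 0.5 * pct_young + 0.3 * pct_immigrant + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([unemployment, relative_wage,
                                     pct_young, pct_immigrant]))
model = sm.OLS(quit_rate, X).fit()
print(model.summary(xname=["const", "unemployment", "relative_wage",
                           "pct_young", "pct_immigrant"]))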

The second type of turnover data is generated from employment records or governmental surveys that provide information on individual workers. Job histories can be created with these data and used to analyze the impact of individual characteristics, such as age, education, and occupation, on labor turnover, firm tenure, and occupational experience. Analysis of this type of data typically employs a “hazard” model that estimates the probability of a worker’s leaving a job as a function of individual worker characteristics. (See, for example, Carter and Savoca 1992, Maloney 1998, Whatley and Sedo 1998.)
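
A minimal sketch of this approach, using the open-source lifelines package on a tiny, made-up dataset (the covariates and values are illustrative assumptions, not drawn from the cited studies):

# Hazard-model sketch: each row is one hypothetical worker, with tenure in
# years, an event flag (1 = separation observed, 0 = record censored), and
# individual characteristics. Data are invented for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "tenure":  [0.5, 2.0, 7.5, 1.2, 4.0, 10.0, 0.8, 3.3],
    "event":   [1,   1,   0,   1,   1,   0,    1,   0],
    "age":     [19,  24,  41,  22,  35,  50,   20,  28],
    "skilled": [0,   0,   1,   0,   1,   1,    0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure", event_col="event")
cph.print_summary()  # hazard ratios: e.g., whether age or skill lowers the quit hazard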

Labor Turnover and Long Term Employment

Another measure of worker/firm attachment is tenure – the number of years a worker stays with a particular job or firm. While significant declines in labor turnover (such as those observed in the 1920s) will likely be reflected in rising average tenure with the firm, high rates of labor turnover do not rule out long tenure within part of the workforce. If high turnover is concentrated among a subset of workers (the young or the unskilled), it can coexist with lifetime jobs for another subset (the skilled): to take a hypothetical example, if half of a firm’s 1,000 employees separate twice a year while the other half never leave, the firm records a separation rate of 100 per 100 employees even though half its workforce holds lifetime jobs. Indeed, the high rates of labor turnover that were common until the mid-1920s co-existed with long-term jobs for some workers. The evidence indicates that while long-term employment became more common in the twentieth century, it was not completely absent from nineteenth-century labor markets (Carter 1988, Carter and Savoca 1990, Hall 1982).

Notes on Turnover Data in Figure 1

The turnover data used to generate Figure 1 come from three separate sources: Brissenden and Frankel (1920) for the 1910-1918 data; Berridge (1929) for the 1919-1929 data; and U.S. Bureau of the Census (1972) for the 1930-1970 data. Several adjustments were necessary to present them in a single format. The Brissenden and Frankel study calculated the separate components of turnover (quits and layoffs) from only a subsample of their data. The subsample data were used to calculate the percentage of total separations accounted for by quits and layoffs, and these percentages were applied to the total separations data from the full sample to estimate the quit and layoff components. The 1930-1970 data reported in Historical Statistics of the United States were collected by the U.S. Bureau of Labor Statistics and originally reported in Employment and Earnings, U.S., 1909-1971. Unlike the earlier series, these data were originally reported as average monthly rates and have been converted into annualized figures by multiplying by 12.
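
Both adjustments are simple arithmetic; the sketch below uses made-up numbers purely to show the two conversions.

# Illustrative arithmetic for the two adjustments described above; the
# numbers are hypothetical, not the underlying historical series.

# (1) Brissenden and Frankel: apportion full-sample total separations using
#     the quit/layoff shares computed from the subsample.
total_separations = 110.0            # per 100 employees, full sample (hypothetical)
subsample_quit_share = 0.72          # share of subsample separations that were quits
quits = total_separations * subsample_quit_share            # 79.2
layoffs = total_separations * (1 - subsample_quit_share)    # 30.8

# (2) BLS series, 1930-1970: average monthly rates converted to annual figures.
avg_monthly_separation_rate = 4.2    # per 100 employees per month (hypothetical)
annual_rate = 12 * avg_monthly_separation_rate              # 50.4 per 100 per year

print(quits, layoffs, annual_rate)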

In addition to the adjustments described above, there are four issues relating to the comparability of these data which should be noted. First, the turnover data for the 1919 to 1929 period are median rates, whereas the data from before and after that period were compiled as weighted averages of the rates of all firms surveyed. If larger firms have lower turnover rates (as Arthur Ross 1958 notes), medians will be higher than weighted averages. The data for the one year covered by both studies (1919) confirm this difference: the median turnover rates from Berridge (1920s data) exceed the weighted average turnover rates from Brissenden and Frankel (1910s data). Brissenden and Frankel suggested that the actual turnover of labor in manufacturing may have been much higher than their sample statistics suggest:

The establishments from which the Bureau of Labor Statistics has secured labor mobility figures have necessarily been the concerns which had the figures to give, that is to say, concerns which had given rather more attention than most firms to their force-maintenance problems. These firms reporting are chiefly concerns which had more or less centralized employment systems and were relatively more successful in the maintenance of a stable work force (1920, p. 40).

A similar underestimation bias continued with the BLS collection of data because the average firm size in the sample was larger than the average firm size in the whole population of manufacturing firms (U.S. Bureau of the Census, p. 160), and larger firms tend to have lower turnover rates.

Second, the data for 1910-1918 (Brissenden and Frankel) include workers in public utilities and mercantile establishments in addition to workers in manufacturing industries and are therefore not directly comparable to the later series on the turnover of manufacturing workers. However, these non-manufacturing workers had lower turnover rates than the manufacturing workers in both 1913/14 and 1917/18 (the two years for which Brissenden and Frankel present industry-level data). Thus, the decline in turnover of manufacturing workers from the 1910s to the 1920s may actually be underestimated.

Third, the turnover rates for 1910 to 1918 (Brissenden and Frankel) were originally calculated per labor hour. The number of employees was estimated at one worker per 3,000 labor hours – the number of hours in a typical work year – so that, for example, a firm reporting 3,000,000 labor hours was counted as employing 1,000 full-year workers. This conversion generates the number of full-year workers, not allowing for any procyclicality of labor hours. If labor hours are procyclical, this calculation overstates (understates) the number of workers during an upswing (downswing), thus dampening the response of turnover rates to economic cycles.

Fourth, total separations are broken down into quits, layoffs, discharges and other (including military enlistment, death and retirement). Prior to 1940, the “other” separations were included in quits.

References

Berridge, William A. “Labor Turnover in American Factories.” Monthly Labor Review 29 (July 1929): 62-65.
Brissenden, Paul F. and Emil Frankel. “Mobility of Labor in American Industry.” Monthly Labor Review 10 (June 1920): 1342-62.
Carter, Susan B. “The Changing Importance of Lifetime Jobs, 1892-1978.” Industrial Relations 27, no. 3 (1988): 287-300.
Carter, Susan B. and Elizabeth Savoca. “The ‘Teaching Procession’? Another Look at Teacher Tenure, 1845-1925.” Explorations in Economic History 29, no. 4 (1992): 401-16.
Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.
Douglas, Paul H. “The Problem of Labor Turnover.” American Economic Review 8, no. 2 (1918): 306-16.
Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, III, edited by Stanley L. Engerman and Robert E. Gallman, 549-623. Cambridge: Cambridge University Press, 2000.
Hall, Robert E. “The Importance of Lifetime Jobs in the U.S. Economy.” American Economic Review 72, no. 4 (1982): 716-24.
Jacoby, Sanford M. “Industrial Labor Mobility in Historical Perspective.” Industrial Relations 22, no. 2 (1983): 261-82.
Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.
Lazear, Edward P. “Agency, Earnings Profiles, Productivity, and Hours Reduction.” American Economic Review 71, no. 4 (1981): 606-19.
Lescohier, Don D. The Labor Market. New York: Macmillan, 1923.
Maloney, Thomas N. “Racial Segregation, Working Conditions and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-95.
Owen, Laura J. “Worker Turnover in the 1920s: What Labor Supply Arguments Don’t Tell Us.” Journal of Economic History 55, no. 4 (1995a): 822-41.
Owen, Laura J. “Worker Turnover in the 1920s: The Role of Changing Employment Policies.” Industrial and Corporate Change 4 (1995b): 499-530.
Ozanne, Robert. A Century of Labor-Management Relations at McCormick and International Harvester. Madison: University of Wisconsin Press, 1967.
Parsons, Donald O. “Quit Rates Over Time: A Search and Information Approach.” American Economic Review 63, no. 3 (1973): 390-401.
Ross, Arthur M. “Do We Have a New Industrial Feudalism?” American Economic Review 48 (1958): 903-20.
Slichter, Sumner. The Turnover of Factory Labor. New York: Appleton, 1921.
Stone, Katherine. “The Origins of Job Structures in the Steel Industry.” Review of Radical Political Economics 6, no. 2 (1974): 113-73.
Sundstrom, William A. “Internal Labor Markets before World War I: On-the-Job Training and Employee Promotion.” Explorations in Economic History 25 (October 1988): 424-45.
U.S. Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, D.C., 1975.
Whatley, Warren C. and Stan Sedo. “Quit Behavior as a Measure of Worker Opportunity: Black Workers in the Interwar Industrial North.” American Economic Review 88, no. 2 (1998): 363-67.

Citation: Owen, Laura. “History of Labor Turnover in the U.S.”. EH.Net Encyclopedia, edited by Robert Whaples. April 29, 2004. URL http://eh.net/encyclopedia/history-of-labor-turnover-in-the-u-s/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third-party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad-ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets are relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. Viewed from some distance and over the long run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Most financed the voyage with their only viable asset, their future labor: they signed contracts, or “indentures,” with British merchants, committing themselves to work for a fixed number of years, and the merchants then sold these contracts to colonists after the ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted it because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Destination Number Percent of Total Percent Listed as Servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 100.00 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, indentured servitude as a means of financing the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor-to-capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.
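
The logic can be made concrete with a textbook competitive sketch (an illustration added here, not the author’s model): with Cobb-Douglas output and wages equal to labor’s marginal product,

% Textbook sketch, not the author's model: output Y = A K^alpha L^(1-alpha),
% so the competitive wage is labor's marginal product.
\[
w = \frac{\partial Y}{\partial L} = (1-\alpha)\,A\left(\frac{K}{L}\right)^{\alpha}
\]
% The wage depends only on the capital-labor ratio K/L: immigration that raises
% L relative to K lowers w, while capital inflows that raise K in step with L
% ("foreign lending follows foreign labor") leave K/L, and hence w, unchanged.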

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and they were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders, and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Year Total Labor Force (1000s) Agriculture Non-Agriculture Manufacturing Services
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: All sectoral figures are percentages of the total labor force; the Manufacturing and Services shares sum to the non-agricultural total. 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because many employment relationships are long-term, adjustments can occur along margins other than wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800,000 such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however: it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, cities throughout the country had for the first time to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills, their productivity increased, giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead, authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses, contracting with the firm to supply components or finished products for an agreed price, and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savoca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor turnover and of informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job ladders and deferred payment plans to help bind workers and employers. These reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985), others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While the market forces of entry and exit will push employers to adopt policies sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant growth in organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade, with the result that the apparatus of government was often arrayed against labor.
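The Freeman-Medoff point can be illustrated with a deliberately crude sketch. Suppose workers differ in how much they value a workplace amenity such as safety; under pure “exit,” the employer needs only to satisfy the marginal (most mobile) worker, here proxied by the lowest valuation, while collective “voice” tends to reflect the median member instead. All numbers are invented for illustration.

# Exit versus voice: whose preferences determine workplace amenities?
# Valuations (in wage units) are invented for illustration; treating
# the lowest valuation as the "marginal" worker is a crude proxy.
import statistics

valuations = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

marginal_worker = min(valuations)               # pinned down by exit
median_worker = statistics.median(valuations)   # pinned down by voice

print(f"amenity valuation of the marginal worker: {marginal_worker}")
print(f"amenity valuation of the median worker: {median_worker}")

The gap between the two is the sense in which market forces alone may leave less mobile workers’ interests under-represented.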

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers compensation insurance schemes and remove the issue from the courts. Once introduced, workers compensation schemes spread quickly: nine states passed legislation in 1911, thirteen more had followed by 1913, and by 1920 forty-four states had such legislation (Fishback 2001).

Along with workers compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males, but rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled unconstitutional, its key labor provisions were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, its passage marks the beginning of the golden age of organized labor. Union membership rose rapidly after 1935, from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had reached a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly—through the establishment of occupational safety regulations and anti-discrimination laws, for example—and indirectly—through its efforts to manage the macroeconomy to ensure maximum employment.

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms, women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, pp. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college-educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of some other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
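A minimal sketch of the comparison involved may help: the test of the law of one price is on real, not nominal, wages. The wage and cost-of-living figures below are invented for illustration.

# Law-of-one-price comparison across two regional labor markets.
# All figures are hypothetical.
markets = {
    "Region A": {"nominal_wage": 1.50, "cost_of_living": 100},
    "Region B": {"nominal_wage": 1.65, "cost_of_living": 110},
}

for name, m in markets.items():
    # Deflate the nominal wage by the local cost-of-living index.
    real_wage = m["nominal_wage"] / (m["cost_of_living"] / 100)
    print(f"{name}: nominal {m['nominal_wage']:.2f}, real {real_wage:.2f}")

Here a 10 percent nominal wage gap disappears entirely once the cost-of-living difference is taken into account, which is why careful controls for living costs and worker homogeneity matter so much in this literature.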

Falling transportation and communications costs have encouraged a long-run trend toward diminishing wage gaps, but this trend has been neither steady over time nor equally strong in all markets. That said, what stands out is the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration, wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and briefly reversed wage convergence. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally-generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Within the United States, wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data—Margo (2000a), Sundstrom and Rosenbloom (1993), and Coelho and Shepherd (1976)—are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey, and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows that there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.
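Montgomery’s dispersion measure is in effect a coefficient of variation: the standard deviation of wages across metropolitan areas divided by the average wage. A minimal sketch, with invented city wages:

# Coefficient of variation of wages across metropolitan areas.
# The wage figures are invented for illustration.
import statistics

smsa_wages = [9.2, 10.1, 9.8, 10.5, 9.5, 10.9, 10.0, 9.7]

mean_wage = statistics.mean(smsa_wages)
sd_wage = statistics.pstdev(smsa_wages)  # population standard deviation
print(f"mean {mean_wage:.2f}, sd {sd_wage:.2f}, CV {sd_wage / mean_wage:.1%}")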

Table 3

Net Migration by Region and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-00 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-00 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age- and sex-specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase, this implies net migration into the region; if the actual increase is less than predicted, this implies net migration out of the region. The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
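The census-survival method described in the note above is straightforward to state in code. A minimal sketch, with invented population figures and a single aggregate survival rate (actual applications apply age- and sex-specific rates):

# Census-survival estimate of net migration, per the note above.
# All numbers are invented; real applications use age- and
# sex-specific mortality rates rather than one aggregate rate.
def net_migration(pop_start, pop_end, survival_rate, births=0):
    """Actual change minus the change predicted from survival of the
    initial population plus births over the decade."""
    predicted_pop_end = pop_start * survival_rate + births
    return pop_end - predicted_pop_end

# Hypothetical region: 1,000,000 people at the start of the decade,
# a decadal survival rate of 0.85, 250,000 births, and an observed
# end-of-decade population of 1,080,000.
print(f"implied net migration: {net_migration(1_000_000, 1_080_000, 0.85, births=250_000):+,.0f}")

The result, -20,000, indicates a net outflow: the actual increase fell short of the predicted increase.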

In addition to geographic wage gaps, economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive, and this essay can only touch on a few of the more general themes as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000a, ch. 4) offers evidence of a high degree of equalization between farm and urban wages within local labor markets as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Hatton 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century saw a substantial convergence in both of these differentials. Table 4 displays comparisons of earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher-paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full-time, full-year workers. Data for 2004 are for median weekly earnings of full-time wage and salary workers derived from data in the Current Population Survey accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of men’s, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s, at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower-paying jobs. Whether these differences are the result of persistent discrimination, or arise because of differences in productivity, or reflect a choice by women to trade off greater flexibility in labor market commitment for lower pay, remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, resulting in higher levels of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurement of the rate of unemployment began only in 1940. For earlier dates, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that both the average level of unemployment and its volatility declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.
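The interpolation step at the heart of the Lebergott-Romer debate can be sketched simply. In the stylized example below, two benchmark unemployment rates are connected by a straight line and an annual indicator series distributes the cyclical variation between them; every number, and the scaling factor applied to the indicator, is an invented assumption of just the kind Romer argued can impart spurious volatility.

# Interpolating annual unemployment between census benchmarks.
# Benchmarks, indicator values, and the scaling factor are all
# invented for illustration.
benchmarks = {1900: 5.0, 1910: 5.9}  # hypothetical benchmark rates (%)

# Hypothetical annual indicator of cyclical conditions (e.g.,
# deviations of output from trend), zero in benchmark years.
indicator = dict(zip(range(1900, 1911),
                     [0.0, 1.2, -0.8, 0.5, 2.1, -1.5, -0.3, 3.0, 1.0, -0.6, 0.0]))

scale = 1.0  # assumed sensitivity of unemployment to the indicator
for year in range(1900, 1911):
    t = (year - 1900) / 10  # position between the two benchmarks
    linear = benchmarks[1900] + t * (benchmarks[1910] - benchmarks[1900])
    print(f"{year}: {linear + scale * indicator[year]:.1f}%")

Choosing a different indicator series or scaling factor changes the implied year-to-year volatility while leaving the benchmarks untouched, which is why competing estimates can agree on levels yet disagree sharply about volatility.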

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated, falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in the increased premiums earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes—especially those associated with the increased use of information technology—have increased the relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue-collar jobs.
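The 90-10 differential cited above is computed directly from the percentiles of the wage distribution. A minimal sketch with an invented sample of wages:

# Computing a 90-10 wage ratio from a sample of wages.
# The wage values are invented for illustration.
import statistics

wages = [7.2, 8.5, 9.1, 10.4, 12.0, 13.3, 15.8, 19.5, 24.0, 31.0]

# statistics.quantiles(n=10) returns the nine decile cut points:
# index 0 is the 10th percentile, index 8 the 90th.
deciles = statistics.quantiles(wages, n=10)
p10, p90 = deciles[0], deciles[8]
print(f"10th percentile {p10:.2f}, 90th percentile {p90:.2f}, 90-10 ratio {p90 / p10:.2f}")

A rise in this ratio over time is exactly the widening of the wage distribution described in the text.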

Efforts to extend the analysis over a longer run encounter problems of more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real-world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather, they are subject to historical forces of increasing returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Timothy J. Hatton. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J. III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at a rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being, in mid May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by more than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.
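The growth rates quoted here are compound average annual rates. As a point of arithmetic (the figures below are for illustration, not the underlying national accounts), real GNP growing at about 11 percent a year implies roughly a 4.8-fold increase over the fifteen years from 1950 to 1965:

# Compound average annual growth rate (CAGR).
def cagr(initial, final, years):
    return (final / initial) ** (1 / years) - 1

# An economy whose real GNP rises 4.8-fold in 15 years grew at:
print(f"{cagr(100, 480, 15):.1%} per year")  # about 11.0% per year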

Governmental budgets and strong protectionist measures to foster import-substitution enabled the development of new industries, chief among them textiles, and subsidies were given to help the development of exports, additional to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, and is followed by brief descriptions of economic growth and fluctuations, and evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel, particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israeli conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of the total government budget. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses, but since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion and covered some 60 percent of total defense imports. Even in more tranquil periods, however, the defense burden, exclusive of U.S. aid, has been much larger than is usual in industrial countries in peacetime.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita that characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000 but falling below zero in the recession years from 2001 to mid-2003. By the end of the twentieth century income per capita had reached about $20,000, similar to that of many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants which abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed in productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange-rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Inflation reached hyperinflationary rates in the early 1980s – about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange-rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very restrictive monetary policy from the late 1990s finally reduced inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were expanded: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent to transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What made this institution unique was that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven to be prosperous – continuously introducing and applying economic innovation – and capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation – the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products – with the labor dislocation it involves and the income inequality it intensifies. Like other small economies, Israel must work out how it fits into the new global economy, marked by the two major markets of the EU and the U.S. and by the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

References and Recommended Reading

Ben-Bassat, Avi, editor. The Israeli Economy, 1985-1998: From Government Intervention to Market Economics. Cambridge, MA: MIT Press, 2002.

Ben-Porath, Yoram, editor. The Israeli Economy: Maturing through Crisis. Cambridge, MA: Harvard University Press, 1986.

Fischer, Stanley, Dani Rodrik and Elias Tuma, editors. The Economics of Middle East Peace. Cambridge, MA: MIT Press, 1993.

Halevi, Nadav and Ruth Klinov-Malul. The Economic Development of Israel. New York: Praeger, 1968.

Kleiman, Ephraim. “Palestinian Economic Viability and Vulnerability.” Paper presented at the UCLA Burkle Conference in Athens, August 2003. (Available at www.international.ucla.edu.)

Metz, Helen Chapin, editor. Israel: A Country Study. Washington: Library of Congress Country Studies, 1986.

Metzer, Jacob. The Divided Economy of Mandatory Palestine. Cambridge: Cambridge University Press, 1998.

Patinkin, Don. The Israel Economy: The First Decade. Jerusalem: Maurice Falk Institute for Economic Research in Israel, 1967.

Razin, Assaf and Efraim Sadka. The Economy of Modern Israel: Malaise and Promise. Chicago: University of Chicago Press, 1993.

World Bank. Developing the Occupied Territories: An Investment in Peace. Washington D.C.: The World Bank, September, 1993.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

Islamic Economics: What It Is and How It Developed

M. Umer Chapra, Islamic Research and Training Institute

Islamic economics has been undergoing a revival over the last few decades. However, it is still at a preliminary stage of development. In contrast, conventional economics has become a well-developed and sophisticated discipline after going through a long and rigorous process of development over more than a century. Is a new discipline in economics needed? If so, what is Islamic economics, how does it differ from conventional economics, and what contributions has it made over the centuries? This article tries to answer these questions briefly.

It is universally recognized that resources are scarce compared with the claims on them. However, it is also recognized by practically all civilizations that the well-being of all human beings needs to be ensured. Given the scarcity of resources, the well-being of all may remain an unrealized dream if the scarce resources are not utilized efficiently and equitably. For this purpose, every society needs to develop an effective strategy, which is consciously or unconsciously conditioned by its worldview. If the worldview is flawed, the strategy may not be able to help the society actualize the well-being of all. Prevailing worldviews may be classified, for the sake of ease, into two broad theoretical constructs: (1) secular and materialist, and (2) spiritual and humanitarian.

The Role of the Worldview

Secular and materialist worldviews attach maximum importance to the material aspect of human well-being and tend generally to ignore the importance of the spiritual aspect. They often argue that maximum material well-being can best be realized if individuals are given unhindered freedom to pursue their self-interest and to maximize their want satisfaction in keeping with their own tastes and preferences.[1] In their extreme form they do not recognize any role for Divine guidance in human life and place full trust in the ability of human beings to chalk out a proper strategy with the help of their reason. In such a worldview there is little role for values or government intervention in the efficient and equitable allocation and distribution of resources. When asked how social interest would be served when everyone has unlimited freedom to pursue his or her self-interest, the reply is that market forces will themselves ensure this, because competition will keep self-interest in check.

In contrast with this, religious worldviews give attention to both the material and the spiritual aspects of human well-being. They do not necessarily reject the role of reason in human development; they do, however, recognize the limitations of reason and wish to complement it by revelation. Nor do they reject the need for individual freedom or the role that the serving of self-interest can play in human development. They emphasize, however, that both freedom and the pursuit of self-interest need to be tempered by moral values and good governance to ensure that everyone’s well-being is realized and that social harmony and family integrity are not hurt in the process of everyone serving his or her self-interest.

Material and Spiritual Needs

Even though none of the major worldviews prevailing around the world is totally materialist and hedonist, there are, nevertheless, significant differences among them in terms of the emphasis they place on material or spiritual goals and the role of moral values and government intervention in ordering human affairs. While material goals concentrate primarily on goods and services that contribute to physical comfort and well-being, spiritual goals include nearness to God, peace of mind, inner happiness, honesty, justice, mutual care and cooperation, family and social harmony, and the absence of crime and anomie. These may not be quantifiable but are, nevertheless, crucial for realizing human well-being. Resources being limited, excessive emphasis on the material ingredients of well-being may lead to a neglect of the spiritual ingredients. The greater the difference in emphasis, the greater may be the difference in the economic disciplines of these societies. Feyerabend (1993) frankly recognized this in the introduction to the Chinese edition of his thought-provoking book, Against Method, by stating that “First world science is only one science among many; by claiming to be more it ceases to be an instrument of research and turns into a (political) pressure group” (p. 3; parentheses in the original).

The Enlightenment Worldview and Conventional Economics

There is a great deal that is common between the worldviews of most major religions, particularly those of Judaism, Christianity and Islam. This is because, according to Islam, there is a continuity and similarity in the value systems of all Revealed religions to the extent to which the Message has not been lost or distorted over the ages. The Qur’an clearly states: “Nothing has been said to you [Muhammad] that was not said to the Messengers before you” (Al-Qur’an, 41:43). If conventional economics had continued to develop in the image of the Judeo-Christian worldview, as it did before the Enlightenment Movement of the seventeenth and eighteenth centuries, there might not have been any significant difference between conventional and Islamic economics. However, after the Enlightenment Movement, all intellectual disciplines in Europe came under the influence of its secular, value-neutral, materialist and social-Darwinist worldview, even though this influence did not succeed fully. Not all economists became materialist or social-Darwinist in their individual lives, and many of them continued to be attached to their religious worldviews. Koopmans (1969) has rightly observed that “scratch an economist and you will find a moralist underneath.” Therefore, while in theory conventional economics adopted the secular and value-neutral orientation of the Enlightenment worldview and failed to recognize the role of value judgments and good governance in the efficient and equitable allocation and distribution of resources, in practice this did not take place fully. The pre-Enlightenment tradition never disappeared completely (see Baeck, 1994, p. 11).

There is no doubt that, in spite of its secular and materialist worldview, the market system led to a long period of prosperity in the Western market-oriented economies. However, this unprecedented prosperity did not lead to the elimination of poverty or the fulfillment of everyone’s needs in conformity with the Judeo-Christian value system, even in the wealthiest countries. Inequalities of income and wealth have continued to persist, and there has also been a substantial degree of economic instability and unemployment, which have added to the miseries of the poor. This indicates that both efficiency and equity have remained elusive in spite of rapid development and a phenomenal rise in wealth.

Consequently there has been persistent criticism of economics by a number of well-meaning scholars, including Thomas Carlyle (Past and Present, 1843), John Ruskin (Unto this Last, 1862) and Charles Dickens (Hard Times, 1854-55) in England, and Henry George (Progress and Poverty, 1879) in America. They ridiculed the dominant doctrine of laissez-faire with its emphasis on self-interest. Thomas Carlyle called economics a “dismal science” and rejected the idea that free and uncontrolled private interests will work in harmony and further the public welfare (see Jay and Jay, 1986). Henry George condemned the resulting contrast between wealth and poverty and wrote: “So long as all the increased wealth which modern progress brings goes but to build great fortunes, to increase luxury and make sharper the contrast between the House of Have and the House of Want, progress is not real and cannot be permanent” (1955, p. 10).

In addition to failing to fulfill the basic needs of a large number of people and increasing inequalities of income and wealth, modern economic development has been associated with the disintegration of the family and a failure to bring peace of mind and inner happiness (Easterlin 2001, 1995 and 1974; Oswald, 1997; Blanchflower and Oswald, 2000; Diener and Oishi, 2000; and Kenny, 1999). Due to these and other problems, the laissez-faire approach lost ground, particularly after the Great Depression of the 1930s, as a result of the Keynesian revolution and the socialist onslaught. However, most observers have concluded that government intervention cannot by itself remove all socio-economic ills. It is also necessary to motivate individuals to do what is right and abstain from doing what is wrong. This is where the moral uplift of society can be helpful. Without it, more and more difficult and costly regulations are needed. Nobel laureate Amartya Sen has, therefore, rightly argued that “the distancing of economics from ethics has impoverished welfare economics and also weakened the basis of a good deal of descriptive and predictive economics” and that economics “can be made more productive by paying greater and more explicit attention to ethical considerations that shaped human behaviour and judgment” (1987, pp. 78-79). Hausman and McPherson conclude similarly in their survey article “Economics and Contemporary Moral Philosophy”: “An economy that is engaged actively and self-critically with the moral aspects of its subject matter cannot help but be more interesting, more illuminating and, ultimately, more useful than the one that tries not to be” (1993, p. 723).

Islamic Economics – and How It Differs from Conventional Economics

While conventional economics is now in the process of returning to its pre-Enlightenment roots, Islamic economics never got entangled in a secular and materialist worldview. It is based on a religious worldview which strikes at the roots of secularism and value neutrality. To ensure the true well-being of all individuals, irrespective of their sex, age, race, religion or wealth, Islamic economics does not seek to abolish private property, as was done by communism, nor does it prevent individuals from serving their self-interest. It recognizes the role of the market in the efficient allocation of resources, but does not find competition to be sufficient to safeguard social interest. It tries to promote human brotherhood, socio-economic justice and the well-being of all through an integrated role of moral values, market mechanism, families, society, and ‘good governance.’ This is because of the great emphasis in Islam on human brotherhood and socio-economic justice.

The Integrated Role of the Market, Families, Society, and Government

The market is not the only institution where people interact in human society. They also interact in the family, the society and the government, and their interaction in all these institutions is closely interrelated. There is no doubt that the serving of self-interest does help raise efficiency in the marketplace. However, if self-interest is overemphasized and there are no moral restraints on individual behavior, other institutions may not work effectively – families may disintegrate, the society may be uncaring, and the government may be corrupt, partisan, and self-centered. Mutual sacrifice is necessary for keeping families glued together. Since the human being is the most important input of not only the market but also the family, the society and the government, and the family is the source of this input, nothing may work if families disintegrate and are unable to provide loving care to children. This is likely to happen if both husband and wife try to serve just their own self-interest and are not attuned to making the sacrifices that the proper care and upbringing of children demand. Lack of willingness to make such sacrifices can lead to a decline in the quality of the human input to all other institutions, including the market, the society and the government. It may also lead to a fall in fertility rates below the replacement level, making it difficult for society to sustain not only its development but also its social security system.

The Role of Moral Values

While conventional economics generally takes the behavior and the tastes and preferences of individuals as given, Islamic economics does not do so. It places great emphasis on individual and social reform through moral uplift. This is the purpose for which all of God’s messengers, including Abraham, Moses, Jesus, and Muhammad, came to this world. Moral uplift aims at changing human behavior, tastes and preferences and thereby complements the price mechanism in promoting general well-being. Before even entering the marketplace and being exposed to the price filter, consumers are expected to pass their claims through the moral filter. This helps filter out conspicuous consumption and all wasteful and unnecessary claims on resources. The price mechanism can then take over and reduce the claims on resources even further, leading to market equilibrium. Together, the two filters can make possible an optimum economy in the use of resources, which is necessary to satisfy the material as well as the spiritual needs of all human beings, to reduce the concentration of wealth in a few hands, and to raise savings, which are needed to promote greater investment and employment. Without complementing the market system with morally-based value judgments, we may end up perpetuating inequities in spite of our good intentions, through what Solo calls inaction, non-choice and drifting (Solo, 1981, p. 38).

From the above discussion, one may easily notice the similarities and differences between the two disciplines. While the subject matter of both is the allocation and distribution of resources, and both emphasize the fulfillment of material needs, Islamic economics places an equal emphasis on the fulfillment of spiritual needs. While both recognize the important role of the market mechanism in the allocation and distribution of resources, Islamic economics argues that the market may not by itself be able to fulfill even the material needs of all human beings, because it can promote excessive use of scarce resources by the rich at the expense of the poor if there is undue emphasis on the serving of self-interest. Sacrifice is involved in fulfilling our obligations towards others, and excessive emphasis on the serving of self-interest does not have the potential of motivating people to make the needed sacrifice. This, however, raises a crucial question: why would a rational person sacrifice his self-interest for the sake of others?

The Importance of the Hereafter

This is where the concepts of the innate goodness of human beings and of the Hereafter come in – concepts which conventional economics ignores but on which Islam and other major religions place a great deal of emphasis. Because of their innate goodness, human beings do not necessarily always try to serve their self-interest. They are also altruistic and are willing to make sacrifices for the well-being of others. In addition, the concept of the Hereafter does not confine self-interest to just this world. It rather extends it beyond this world to life after death. We may be able to serve our self-interest in this world by being selfish, dishonest, uncaring, and negligent of our obligations towards our families, other human beings, animals, and the environment. However, we cannot serve our self-interest in the Hereafter except by fulfilling all these obligations.

Thus, the serving of self-interest receives a long-run perspective in Islam and other religions by taking into account both this world and the next. This provides a motivating mechanism for sacrifice for the well-being of others that conventional economics fails to provide. The innate goodness of human beings, along with the long-run perspective given to self-interest, has the potential of inducing a person to be not only efficient but also equitable and caring. Consequently, the three crucial concepts of conventional economics – rational economic man, positivism, and laissez-faire – were unable, in their conventional sense, to gain the intellectual blessing of any of the outstanding scholars who represent the mainstream of Islamic thought.

Rational Economic Man

While there is hardly anyone opposed to the need for rationality in human behavior, there are differences of opinion in defining rationality (Sen, 1987, pp. 11-14). However, once rationality has been defined in terms of overall individual as well as social well-being, rational behavior can only be that which helps us realize this goal. Conventional economics does not define rationality in this way. It equates rationality with the serving of self-interest through the maximization of wealth and want satisfaction. The drive of self-interest is considered to be the “moral equivalent of the force of gravity in nature” (Myers, 1983, p. 4). Within this framework society is conceptualized as a mere collection of individuals united through ties of self-interest.

The concept of ‘rational economic man’ in this social-Darwinist, utilitarian, and material sense of serving self-interest could not find a foothold in Islamic economics. ‘Rationality’ in Islamic economics is not confined to the serving of one’s self-interest in this world alone; it extends to the Hereafter through faithful compliance with moral values that help rein in self-interest so as to promote social interest. Al-Mawardi (d. 1058) considered it necessary, like all other Muslim scholars, to rein in individual tastes and preferences through moral values (1955, pp. 118-20). Ibn Khaldun (d. 1406) emphasized that moral orientation helps remove mutual rivalry and envy, strengthens social solidarity, and creates an inclination towards righteousness (n.d., p. 158).

Positivism

Similarly, positivism in the conventional economics sense of being “entirely neutral between ends” (Robbins, 1935, p. 240) or “independent of any particular ethical position or normative judgment” (Friedman, 1953) did not find a place in Muslim intellectual thinking. Since all resources at the disposal of human beings are a trust from God, and human beings are accountable before Him, there is no other option but to use them in keeping with the terms of trust. These terms are defined by beliefs and moral values. Human brotherhood, one of the central objectives of Islam, would be a meaningless jargon if it were not reinforced by justice in the allocation and distribution of resources.

Pareto Optimum

Without justice, even development would be difficult to realize. Muslim scholars have emphasized this throughout history. Development economics has also begun to emphasize its importance, especially in the last few decades.[2] Abu Yusuf (d. 798) argued: “Rendering justice to those wronged and eradicating injustice, raises tax revenue, accelerates development of the country, and brings blessings in addition to reward in the Hereafter” (1933/34, p. 111; see also pp. 3-17). Al-Mawardi argued that comprehensive justice “inculcates mutual love and affection, obedience to the law, development of the country, expansion of wealth, growth of progeny, and security of the sovereign” (1955, p. 27). Ibn Taymiyyah (d. 1328) emphasized that “justice towards everything and everyone is an imperative for everyone, and injustice is prohibited to everything and everyone. Injustice is absolutely not permissible irrespective of whether it is to a Muslim or a non-Muslim or even to an unjust person” (1961-63, Vol. 18, p. 166).

Justice and the well-being of all may be difficult to realize without a sacrifice on the part of the well-to-do. The concept of Pareto optimum does not, therefore, fit into the paradigm of Islamic economics. This is because Pareto optimum does not recognize any solution as optimum if it requires a sacrifice on the part of a few (rich) for raising the well-being of the many (poor). Such a position is in clear conflict with moral values, the raison d’être of which is the well-being of all. Hence, this concept did not arise in Islamic economics. In fact, Islam makes it a religious obligation of Muslims to make a sacrifice for the poor and the needy, by paying Zakat at the rate of 2.5 percent of their net worth. This is in addition to the taxes that they pay to the governments as in other countries.
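
For reference, the criterion being set aside here can be stated formally. The following is a minimal sketch in standard modern welfare-economics notation, offered as a gloss rather than as a formulation found in the sources discussed above:

\[
x \text{ is Pareto optimal} \iff \text{there is no feasible } y \text{ with } u_i(y) \ge u_i(x) \text{ for all } i \text{ and } u_j(y) > u_j(x) \text{ for some } j.
\]

Under this criterion a pure transfer from rich to poor can never count as an improvement, since the donor’s $u_i$ falls; this is precisely the feature that, on the argument above, conflicts with a moral standard whose raison d’être is the well-being of all.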

The Role of State

Moral values may not be effective if they are not observed by all. They need to be enforced. It is the duty of the state to restrain all socially harmful behavior[3] – including injustice, fraud, cheating, transgression against other people’s person, honor and property, and the non-fulfillment of contracts and other obligations – through proper upbringing, incentives and deterrents, appropriate regulations, and an effective and impartial judiciary. The Qur’an can only provide norms; it cannot by itself enforce them. The state has to ensure this. That is why the Prophet Muhammad said: “God restrains through the sovereign more than what He restrains through the Qur’an” (cited by al-Mawardi, 1955, p. 121). This emphasis on the role of the state has been reflected in the writings of all leading Muslim scholars throughout history.[4] Al-Mawardi emphasized that an effective government (Sultan Qahir) is indispensable for preventing injustice and wrongdoing (1960, p. 5). Say’s Law could not, therefore, become a meaningful proposition in Islamic economics.

How far is the state expected to go in the fulfillment of its role? What is it that the state is expected to do? This has been spelled out by a number of scholars in the literature on what has come to be termed “Mirrors for Princes.”[5] None of them visualized regimentation or the owning and operating of a substantial part of the economy by the state. Several classical Muslim scholars, including al-Dimashqi (d. after 1175) and Ibn Khaldun, clearly expressed their disapproval of the state becoming directly involved in the economy (al-Dimashqi, 1977, pp. 12 and 61; Ibn Khaldun, pp. 281-83). According to Ibn Khaldun, the state should not acquire the character of a monolithic or despotic state resorting to a high degree of regimentation (ibid., p. 188). It should not feel that, because it has authority, it can do anything it likes (ibid., p. 306). It should be welfare-oriented, moderate in its spending, respect the property rights of the people, and avoid onerous taxation (ibid., p. 296). This implies that what these scholars visualized as the role of government is what has now come to be generally referred to as ‘good governance’.

Some of the Contributions Made by Islamic Economics

The above discussion should not lead one to the impression that the two disciplines are entirely different. One reason for this is that the subject matter of both disciplines is the same: the allocation and distribution of scarce resources. Another reason is that conventional economists have never been entirely value neutral; they have made value judgments in conformity with their beliefs. As indicated earlier, even the paradigm of conventional economics has been changing – the role of good governance has now become well recognized, and the injection of a moral dimension has been emphasized by a number of prominent economists. Moreover, Islamic economists have benefited a great deal from the tools of analysis developed by neoclassical, Keynesian, social, humanistic and institutional economics as well as other social sciences, and will continue to do so in the future.

The Fallacy of the ‘Great Gap’ Theory

A number of economic concepts developed in Islamic economics long before they did in conventional economics. These cover a number of areas, including the interdisciplinary approach; property rights; division of labor and specialization; the importance of saving and investment for development; the roles that both demand and supply play in the determination of prices and the factors that influence demand and supply; the roles of money, exchange, and the market mechanism; characteristics of money, counterfeiting, currency debasement, and Gresham’s law; the development of checks, letters of credit and banking; labor supply and population; the role of the state, justice, peace, and stability in development; and principles of taxation. It is not possible to provide comprehensive coverage of all the contributions Muslim scholars have made to economics. Only some of their contributions will be highlighted below, to dispel the notion of the “Great Gap” of “over 500 years” that exists in the history of conventional economic thought as a result of the incorrect conclusion drawn by Joseph Schumpeter in History of Economic Analysis (1954) that the intervening period between the Greeks and the Scholastics was sterile and unproductive.[6] This notion has become well embedded in the conventional economics literature, as may be seen from the reference to it even by the Nobel laureate Douglass North in his December 1993 Nobel lecture (1994, p. 365). Consequently, as Todd Lowry has rightly observed, “the character and sophistication of Arabian writings has been ignored” (see his ‘Foreword’ in Ghazanfar, 2003, p. xi).

The reality, however, is that the Muslim civilization, which benefited greatly from the Chinese, Indian, Sassanian and Byzantine civilizations, itself made rich contributions to intellectual activity, including socio-economic thought, during the ‘Great Gap’ period, and thereby played a part in kindling the flame of the European Enlightenment Movement. Even the Scholastics themselves were greatly influenced by the contributions made by Muslim scholars. The names of Ibn Sina (Avicenna, d. 1037), Ibn Rushd (Averroes, d. 1198) and Maimonides (d. 1204, a Jewish philosopher, scientist, and physician who flourished in Muslim Spain) appear on almost every page of the thirteenth-century summa (treatises written by scholastic philosophers) (Pifer, 1978, p. 356).

Multidisciplinary Approach for Development

One of the most important contributions of Islamic economics, in addition to the paradigm discussion above, was the adoption of a multidisciplinary dynamic approach. Muslim scholars did not focus their attention primarily on economic variables. They considered overall human well-being to be the end product of interaction, over a long period of time, between a number of economic, moral, social, political, demographic and historical factors, in such a way that none of them is able to make an optimum contribution without the support of the others. Justice occupied a pivotal place in this whole framework because of its crucial importance in the Islamic worldview. There was an acute realization that justice is indispensable for development and that, in the absence of justice, there will be decline and disintegration.

The contributions made by different scholars over the centuries seem to have reached their consummation in Ibn Khaldun’s Muqaddimah, which literally means ‘introduction,’ and constitutes the first volume of a seven-volume history, briefly called Kitab al-‘Ibar or the Book of Lessons [of History].[7] Ibn Khaldun lived at a time (1332-1406) when the Muslim civilization was in the process of decline. He wished to see a reversal of this tide, and, as a social scientist, he was well aware that such a reversal could not be envisaged without first drawing lessons (‘ibar) from history to determine the factors that had led the Muslim civilization to bloom out of humble beginnings and to decline thereafter. He was, therefore, not interested in knowing just what happened. He wanted to know the how and why of what happened. He wanted to introduce a cause-and-effect relationship into the discussion of historical phenomena. The Muqaddimah is the result of this desire. It tries to derive the principles that govern the rise and fall of a ruling dynasty, state (dawlah) or civilization (‘umran).

Since the center of Ibn Khaldun’s analysis is the human being, he sees the rise and fall of dynasties or civilizations as closely dependent on the well-being or misery of the people. The well-being of the people is in turn dependent not just on economic variables, as conventional economics has emphasized until recently, but also on the closely interrelated role of moral, psychological, social, economic, political, demographic and historical factors. One of these factors acts as the trigger mechanism. The others may, or may not, react in the same way. If the others do not react in the same direction, then the decay in one sector may not spread to the others, and either the decaying sector may be reformed or the decline of the civilization may be much slower. If, however, the other sectors react in the same direction as the trigger mechanism, the decay will gain momentum through an interrelated chain reaction, such that it becomes difficult over time to identify the cause from the effect. He thus seems to have had a clear vision of how all the different factors operate in an interrelated and dynamic manner over a long period to promote the development or decline of a society.

He did not, therefore, adopt the neoclassical economist’s simplification of confining himself primarily to the short-term static analysis of markets alone by unrealistically assuming that all other factors remain constant. Even in the short run, everything may be in a state of flux through a chain reaction to the various changes constantly taking place in human society, even though these may be so small as to be imperceptible. Therefore, even though economists may adopt the ceteris paribus assumption for ease of analysis, Ibn Khaldun’s multidisciplinary dynamics can be more helpful in formulating socio-economic policies that help improve the overall performance of a society. Neoclassical economics is unable to do this because, as North has rightly asked, “How can one prescribe policies when one does not understand how economies develop?” He therefore considers neoclassical economics to be “an inappropriate tool to analyze and prescribe policies that will induce development” (North, 1994, p. 549).

However, this is not all that Islamic economics has done. Muslim scholars, including Abu Yusuf (d. 798), al-Mawardi (d. 1058), Ibn Hazm (d. 1064), al-Sarakhsi (d. 1090), al-Tusi (d. 1093), al-Ghazali (d. 1111), al-Dimashqi (d. after 1175), Ibn Rushd (d. 1198), Ibn Taymiyyah (d. 1328), Ibn al-Ukhuwwah (d. 1329), Ibn al-Qayyim (d. 1350), al-Shatibi (d. 1388), Ibn Khaldun (d. 1406), al-Maqrizi (d. 1442), al-Dawwani (d. 1501), and Shah Waliyullah (d. 1762), made a number of valuable contributions to economic theory. Their insight into some economic concepts was so deep that a number of the theories they propounded could undoubtedly be considered the forerunners of more sophisticated modern formulations of these theories.[8]

Division of Labor, Specialization, Trade, Exchange and Money and Banking

A number of scholars emphasized the necessity of the division of labor for economic development long before this happened in conventional economics. For example, al-Sarakhsi (d. 1090) said: “the farmer needs the work of the weaver to get clothing for himself, and the weaver needs the work of the farmer to get his food and the cotton from which the cloth is made …, and thus everyone of them helps the other by his work…” (1978, Vol. 30, p. 264). Al-Dimashqi, writing about a century later, elaborates further by saying: “No individual can, because of the shortness of his life span, burden himself with all industries. If he does, he may not be able to master the skills of all of them from the first to the last. Industries are all interdependent. Construction needs the carpenter and the carpenter needs the ironsmith and the ironsmith needs the miner, and all these industries need premises. People are, therefore, necessitated by force of circumstances to be clustered in cities to help each other in fulfilling their mutual needs” (1977, pp. 20-21).

Ibn Khaldun ruled out the feasibility or desirability of self-sufficiency, and emphasized the need for division of labor and specialization by indicating that: “It is well-known and well-established that individual human beings are not by themselves capable of satisfying all their individual economic needs. They must all cooperate for this purpose. The needs that can be satisfied by a group of them through mutual cooperation are many times greater than what individuals are capable of satisfying by themselves” (p. 360). In this respect he was perhaps the forerunner of the theory of comparative advantage, the credit for which is generally given in conventional economics to David Ricardo who formulated it in 1817.

The discussion of division of labor and specialization, in turn, led to an emphasis on trade and exchange, the existence of well-regulated and properly functioning markets through their effective regulation and supervision (hisbah), and money as a stable and reliable measure, medium of exchange and store of value. However, because of bimetallism (gold and silver coins circulating together) which then prevailed, and the different supply and demand conditions that the two metals faced, the rate of exchange between the two full-bodied coins fluctuated. This was further complicated by debasement of currencies by governments in the later centuries to tide over their fiscal problems. This had, according to Ibn Taymiyyah (d. 1328) (1961-63, Vol. 29, p. 649), and later on al-Maqrizi (d. 1442) and al-Asadi (d. 1450), the effect of bad coins driving good coins out of circulation (al-Misri, 1981, pp. 54 and 66), a phenomenon which was recognized and referred to in the West in the sixteenth century as Gresham’s Law. Since debasement of currencies is in sheer violation of the Islamic emphasis on honesty and integrity in all measures of value, fraudulent practices in the issue of coins in the fourteenth century and afterwards elicited a great deal of literature on monetary theory and policy. The Muslims, according to Baeck, should, therefore, be considered forerunners and critical incubators of the debasement literature of the fourteenth and fifteenth centuries (Baeck, 1994, p. 114).
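
The mechanism now known as Gresham’s Law can be put compactly in modern notation. The following is an illustrative sketch; the symbols are assumptions of mine, not anything in the medieval sources. Suppose two coins are legal tender at the same face value $f$, but their metal contents are worth $m_g$ and $m_b$ with $m_g > m_b$. Settling a debt of $f$ with the good coin then sacrifices bullion worth

\[
m_g - m_b > 0
\]

more than settling it with the bad coin, so rational holders spend the debased coin and hoard, melt, or export the full-bodied one – and only ‘bad’ money remains in circulation.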

To finance their expanding domestic and international trade, the Muslim world also developed a financial system which was able to mobilize the “entire reservoir of monetary resources of the mediaeval Islamic world” for financing agriculture, crafts, manufacturing and long-distance trade (Udovitch, 1970, pp. 180 and 261). Financiers were known as sarrafs. By the time of the Abbasid Caliph al-Muqtadir (908-32), they had started performing most of the basic functions of modern banks (Fischel, 1992). They had their own markets, something akin to Wall Street in New York and Lombard Street in London, and fulfilled all the banking needs of commerce, agriculture and industry (Duri, 1986, p. 898). This promoted the use of checks (sakk) and letters of credit (hawala). The English word check comes from the Arabic term sakk.

Demand and Supply

A number of Muslim scholars seem to have clearly understood the role of both demand and supply in the determination of prices. For example, Ibn Taymiyyah (d. 1328) wrote: “The rise or fall of prices may not necessarily be due to injustice by some people. They may also be due to the shortage of output or the import of commodities in demand. If the demand for a commodity increases and the supply of what is demanded declines, the price rises. If, however, the demand falls and the supply increases, the price falls” (1961-3, Vol. 8, p. 523).

Even before Ibn Taymiyyah, al-Jahiz (d. 869), writing nearly five centuries earlier, observed that “anything available in the market is cheap because of its availability [supply] and dear by its lack of availability if there is need [demand] for it” (1983, p. 13), and that “anything the supply of which increases, becomes cheap except intelligence, which becomes dearer when it increases” (ibid., p. 13).

Ibn Khaldun went even further by emphasizing that both an increase in demand and a fall in supply lead to a rise in prices, while a decline in demand or a rise in supply contributes to a fall in prices (pp. 393 and 396). He believed that while the continuation of ‘excessively low’ prices hurts the craftsmen and traders and drives them out of the market, the continuation of ‘excessively high’ prices hurts the consumers. ‘Moderate’ prices between the two extremes were, therefore, desirable, because they would not only allow the traders a socially acceptable level of return but also lead to the clearance of the markets by promoting sales and thereby generating a given turnover and prosperity (ibid., p. 398). Nevertheless, low prices were desirable for necessities because they provide relief to the poor, who constitute the majority of the population (ibid., p. 398). In modern terminology, one could say that Ibn Khaldun found a stable price level with a relatively low cost of living to be preferable, from the point of view of both growth and equity, to bouts of inflation and deflation. The former hurts equity while the latter reduces incentive and efficiency. Low prices for necessities should not, however, be attained through the fixing of prices by the state; this destroys the incentive for production (ibid., pp. 279-83).

The factors determining demand were, according to Ibn Khaldun, income, the price level, the size of the population, government spending, the habits and customs of the people, and the general development and prosperity of the society (ibid., pp. 398-404). The factors determining supply were demand (ibid., pp. 400 and 403), order and stability (pp. 306-08), the relative rate of profit (ibid., pp. 395 and 398), the extent of human effort (p. 381), the size of the labor force as well as its knowledge and skill (pp. 363 and 399-400), peace and security (pp. 394-95 and 396), and the technical background and development of the whole society (pp. 399-403). All these constituted important elements of his theory of production. If the price falls and leads to a loss, capital is eroded and the incentive to supply declines, leading to a recession. Trade and crafts consequently suffer as well (p. 398).

This is highly significant because the role of both demand and supply in the determination of value was not well understood in the West until the late nineteenth and the early twentieth centuries. Pre-classical English economists like William Petty (1623-87), Richard Cantillon (1680-1734), James Steuart (1712-80), and even Adam Smith (1723-90), the founder of the Classical School, generally stressed only the role of the cost of production, and particularly of labor, in the determination of value. The first use in English writings of the notions of both demand and supply was perhaps in 1767 (Thweatt, 1983). Nevertheless, it was not until the second decade of the nineteenth century that the role of both demand and supply in the determination of market prices began to be fully appreciated (Groenewegen, 1973). While Ibn Khaldun had been way ahead of conventional economists, he probably did not have any idea of demand and supply schedules, elasticities of demand and supply and most important of all, equilibrium price, which plays a crucial role in modern economic discussions.
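
In the modern notation that Ibn Khaldun lacked, his comparative statics can nevertheless be reproduced in a minimal linear model. The functional forms and symbols below are illustrative assumptions, not anything in the Muqaddimah:

\[
Q_d = a - bP, \qquad Q_s = c + dP \qquad (b, d > 0).
\]

Setting $Q_d = Q_s$ gives the equilibrium price

\[
P^* = \frac{a - c}{b + d},
\]

which rises when demand increases (a higher $a$) or supply contracts (a lower $c$), and falls in the opposite cases – exactly the four movements described by Ibn Taymiyyah and Ibn Khaldun.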

Public Finance

Taxation

The development of the canons of taxation can be traced in the writings of pre-Islamic as well as Muslim scholars long before Adam Smith (d. 1790), who is famous, among other things, for his canons of equality, certainty, convenience of payment, and economy in collection (see Smith, 1937, pp. 777-79). These scholars particularly stressed the need for the tax system to be just and not oppressive. Caliphs Umar (d. 644), Ali (d. 661) and Umar ibn Abd al-Aziz (d. 720) stressed that taxes should be collected with justice and leniency and should not be beyond the ability of the people to bear. Tax collectors should not under any circumstances deprive the people of the necessities of life (Abu Yusuf, 1933/34, pp. 14, 16 and 86). Abu Yusuf, adviser to Caliph Harun al-Rashid (786-809), argued that a just tax system would lead not only to an increase in revenues but also to the development of the country (Abu Yusuf, 1933/34, p. 111; see also pp. 14, 16, 60, 85, 105-19 and 125). Al-Mawardi also argued that the tax system should do justice to both the taxpayer and the treasury – “taking more was iniquitous with respect to the rights of the people, while taking less was unfair with respect to the right of the public treasury” (1960, p. 209; see also pp. 142-56 and 215).[9]

Ibn Khaldun stressed the principles of taxation very forcefully in the Muqaddimah. He quoted from a letter written by Tahir ibn al-Husayn, Caliph al-Ma’mun’s general, advising his son, ‘Abdullah ibn Tahir, Governor of al-Raqqah (Syria): “So distribute [taxes] among all people making them general, not exempting anyone because of his nobility or wealth and not exempting even your own officials or courtiers or followers. And do not levy on anyone a tax which is beyond his capacity to pay” (p. 308).[10] In this particular passage, he stressed the principles of equity and neutrality, while in other places he also stressed the principles of convenience and productivity.

The effect of taxation on incentives and productivity was so clearly visualized by Ibn Khaldun that he seems to have grasped the concept of optimum taxation. He anticipated the gist of the Laffer Curve, nearly six hundred years before Arthur Laffer, in two full chapters of the Muqaddimah.[11] At the end of the first chapter, he concluded that “the most important factor making for business prosperity is to lighten as much as possible the burden of taxation on businessmen, in order to encourage enterprise by ensuring greater profits [after taxes]” (p. 280). This he explained by stating that “when taxes and imposts are light, the people have the incentive to be more active. Business therefore expands, bringing greater satisfaction to the people because of low taxes …, and tax revenues also rise, being the sum total of all assessments” (p. 279). He went on to say that as time passes the needs of the state increase and rates of taxation rise to increase the yield. If this rise is gradual people become accustomed to it, but ultimately there is an adverse impact on incentives. Business activity is discouraged and declines, and so does the yield of taxation (pp. 280-81). A prosperous economy at the beginning of the dynasty, thus, yields higher tax revenue from lower tax rates while a depressed economy at the end of the dynasty, yields smaller tax revenue from higher rates (p. 279). He explained the reasons for this by stating: “Know that acting unjustly with respect to people’s wealth, reduces their will to earn and acquire wealth … and if the will to earn goes, they stop working. The greater the oppression, the greater the effect on their effort to earn … and, if people abstain from earning and stop working, the markets will stagnate and the condition of people will worsen” (pp. 286-87); tax revenues will also decline (p. 362). He, therefore, advocated justice in taxation (p. 308).
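
Ibn Khaldun’s verbal argument maps onto a simple revenue function. The linear tax base below is an assumed form chosen purely for illustration, not anything found in the Muqaddimah:

\[
R(t) = t \cdot B(t), \qquad B(t) = B_0\,(1 - t), \qquad 0 \le t \le 1,
\]

so that $R(t) = B_0\,(t - t^2)$, which rises up to $t^* = 1/2$ and falls thereafter. A prosperous economy taxing below $t^*$ collects more revenue from lower rates than a depressed economy taxing above it – the gist of the Laffer Curve.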

Public Expenditure

For Ibn Khaldun the state was also an important factor of production. By its spending it promotes production and by its taxation it discourages production (pp. 279-81). Since the government constitutes the greatest market for goods and services, and is a major source of all development (pp. 286 and 403), a decrease in its spending leads to not only a slackening of business activity and a decline in profits but also a decline in tax revenue (p. 286). The more the government spends, the better it may be for the economy (p. 286).[12] Higher spending enables the government to do the things that are needed to support the population and to ensure law and order and political stability (pp. 306 and 308). Without order and political stability, the producers have no incentive to produce. He stated that “the only reason [for the accelerated development of cities] is that the government is near them and pours its money into them, like the water [of a river] that makes green everything around it, and irrigates the soil adjacent to it, while in the distance everything remains dry” (p. 369).

Ibn Khaldun also analyzed the effect of government expenditure on the economy and is, in this respect, a forerunner of Keynes. He stated: “A decrease in government spending leads to a decline in tax revenues. The reason for this is that the state represents the greatest market for the world and the source of civilization. If the ruler hoards tax revenues, or if these are lost, and he does not spend them as they should be, the amount available with his courtiers and supporters would decrease, as would also the amount that reaches through them to their employees and dependents [the multiplier effect]. Their total spending would, therefore, decline. Since they constitute a significant part of the population and their spending constitutes a substantial part of the market, business will slacken and the profits of businessmen will decline, leading also to a decline in tax revenues … Wealth tends to circulate between the people and the ruler, from him to them and from them to him. Therefore, if the ruler withholds it from spending, the people would become deprived of it” (p. 286).
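
The bracketed ‘multiplier effect’ can be made explicit in standard Keynesian notation; the formula below is the modern textbook statement, offered as a gloss on the passage rather than as anything Ibn Khaldun wrote. If each recipient re-spends a fraction $c$ (with $0 < c < 1$) of what reaches him, a change $\Delta G$ in the ruler’s spending changes total spending by

\[
\Delta Y = \Delta G\,(1 + c + c^2 + \cdots) = \frac{\Delta G}{1 - c},
\]

so withholding tax revenue from spending contracts market activity by a multiple of the amount withheld, through exactly the chain from courtiers and supporters to their employees and dependents that the quotation traces.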

Economic Mismanagement and Famine

Ibn Khaldun established the causal link between bad government and high grain prices by indicating that in the later stage of the dynasty, when public administration becomes corrupt and inefficient, and resorts to coercion and oppressive taxation, incentive is adversely affected and the farmers refrain from cultivating the land. Grain production and reserves fail to keep pace with the rising population. The absence of reserves causes supply shortages in the event of a famine and leads to price escalation (pp. 301-02).

Al-Maqrizi (d. 1442), who as muhtasib (market supervisor) had intimate knowledge of the economic conditions of his time, applied Ibn Khaldun’s analysis in his book (1956) to determine the reasons for the economic crisis of Egypt during the period 1403-06. He found that the political administration had become very weak and corrupt during the Circassian period. Public officials were appointed on the basis of bribery rather than ability.[13] To recover the bribes, officials resorted to oppressive taxation. The incentive to work and produce was adversely affected and output declined. The crisis was further intensified by debasement of the currency through the excessive issue of copper fulus, or fiat money, to cover state budgetary deficits. These factors, combined with the famine, led to severe inflation, misery for the poor, and the impoverishment of the country.

Hence, al-Maqrizi laid bare the socio-political determinants of the prevailing ‘system crisis’ by taking into account a number of variables, including corruption, bad government policies, and weak administration. Together these worsened the impact of the famine, which could otherwise have been handled without a significant adverse impact on the population. This analysis is a clear forerunner of Sen’s entitlement theory, which holds the economic mismanagement of illegitimate governments responsible for the misery of the poor during famines and other natural disasters (Sen, 1981). What al-Maqrizi wrote of the Circassian Mamluks was also true of the later Ottoman period (see Meyer, 1989).

Stages of Development

Ibn Khaldun described the stages of development through which every society passes, moving from the primitive Bedouin stage to the rise of villages, towns, and urban centers with an effective government, the development of agriculture, industry, and sciences, and the impact of values and environment on this development (Muqaddimah, pp. 35, 41-44, 87-95, 120-48, 172-76). Waliyullah[14] (d. 1762) later analyzed the development of society through four different stages from primitive existence to a well-developed community with khilafah (a morally-based welfare state), which tries to ensure the spiritual as well as material well-being of the people. Like Ibn Khaldun, he considered political authority to be indispensable for human well-being. To serve as a source of well-being for all, and not of burden and decay, it must have the characteristics of the khilafah. He applied this analysis in various writings to the conditions prevailing during his lifetime. He found that the luxurious lifestyle of the rulers, along with their exhausting military campaigns, the increasing corruption and inefficiency of the civil service, and huge stipends to a vast retinue of unproductive courtiers, led to the imposition of oppressive taxes on farmers, traders, and craftsmen, who constituted the main productive section of the population. These people had, therefore, lost interest in their occupations, output had slowed, state financial resources had declined, and the country had become impoverished (Waliyullah, 1992, Vol. I, pp. 119-52). Thus, in step with Ibn Khaldun and other Muslim scholars, al-Maqrizi and Waliyullah combined moral, political, social, and economic factors to explain the economic phenomena of their times and the rise and fall of their societies.

Muslim Intellectual Decline

Unfortunately, the rich theoretical contribution made by Muslim scholars up until Ibn Khaldun was not fertilized and irrigated by later scholars so as to lead to the development of Islamic economics, except by a few isolated scholars like al-Maqrizi, al-Dawwani (d. 1501), and Waliyullah. Their contributions were, however, confined to specific areas and did not carry forward Ibn Khaldun’s model of socio-economic and political dynamics. Islamic economics did not, therefore, develop as a separate intellectual discipline in conformity with the Islamic paradigm along the theoretical foundations and method laid down by Ibn Khaldun and his predecessors. It continued to remain an integral part of the social and moral philosophy of Islam.

One may ask here why the rich intellectual contributions made by Muslim scholars did not continue after Ibn Khaldun. The reason may be that, as indicated earlier, Ibn Khaldun lived at a time when the political and socio-economic decline of the Muslim world was underway.[15] He was perhaps “the sole point of light in his quarter of the firmament” (Toynbee, 1935, Vol. 3, p. 321). According to Ibn Khaldun himself, sciences progress only when a society is itself progressing (p. 434). This theory is clearly upheld by Muslim history. Sciences progressed rapidly in the Muslim world for four centuries from the middle of the eighth century to the middle of the twelfth century and continued to do so at a substantially decelerated pace for at least two more centuries, tapering off gradually thereafter (Sarton 1927, Vol. 1 and Book 1 of Vol. 2). Once in a while there did appear a brilliant star on an otherwise unexciting firmament. Economics was no exception. It also continued to be in a state of limbo in the Muslim world. No worthwhile contributions were made after Ibn Khaldun.

The trigger mechanism for this decline was, according to Ibn Khaldun, the failure of political authority to provide good governance. Political illegitimacy, which started after the end of the khilafah in 661, gradually led to increased corruption and the use of state resources for private benefit, to the neglect of education and other nation-building functions of the state. This gradually triggered the decline of all other sectors of the society and economy.[16]

The rapidly rising Western civilization took over the torch of knowledge from the declining Muslim world and has kept it burning with even greater brightness. All sciences, including the social sciences, have made phenomenal progress. Conventional economics became a separate academic discipline after the publication of Alfred Marshall’s great treatise, Principles of Economics, in 1890 (Schumpeter, 1954, p. 21),[17] and has continued to develop since then at a remarkable speed. With such a great achievement to its credit, there is no psychological need to allow the ‘Great Gap’ thesis to persist. It would help promote better understanding of Muslim civilization in the West if textbooks started giving credit to Muslim scholars. They were “the torchbearers of ancient learning during the medieval period” and “it was from them that the Renaissance was sparked and the Enlightenment kindled” (Todd Lowry in his ‘Foreword’ in Ghazanfar, 2003, p. xi). Watt has been frank enough to admit that “the influence of Islam on Western Christendom is greater than is usually realized” and that “an important task for Western Europeans, as we move into the era of the one world, is … to acknowledge fully our debt to the Arab and Islamic world” (Watt, 1972, p. 84).

Conventional economics, however, took a wrong turn after the Enlightenment Movement by stripping itself of the moral basis of society emphasized by Aristotelian and Judeo-Christian philosophies. This deprived it of the role that moral values and good governance can play in helping society raise both efficiency and equity in the allocation and distribution of the scarce resources needed for promoting the well-being of all. However, this has been changing. The role of good governance has already been recognized, and that of moral values is gradually penetrating the economics orthodoxy. Islamic economics is also reviving now after the independence of Muslim countries from foreign domination. It is likely that the two disciplines will converge and become one after a period of time. This will be in keeping with the teachings of the Qur’an, which clearly states that mankind was created as one but became divided as a result of their differences and transgression against each other (10:19, 2:213 and 3:19). This reunification (globalization, as it is now called), if reinforced by justice and mutual care, should help promote peaceful coexistence and enable mankind to realize the well-being of all, a goal to which we all anxiously look forward.

References

Abu Yusuf, Ya ‘qub ibn Ibrahim. Kitab al-Kharaj. Cairo: al-Matab‘ah al-Salafiyyah, second edition, 1933/34. (This book has been translated into English by A. Ben Shemesh. Taxation in Islam. Leiden: E. J. Brill, 1969.)

Allouche, Adel. Mamluk Economics: A Study and Translation of Al-Maqrizi’s Ighathah. Salt Lake City: University of Utah Press, 1994.

Baeck, Louis. The Mediterranean Tradition in Economic Thought. London: Routledge, 1994.

Blanchflower, David, and Andrew Oswald. “Well-being over Time in Britain and USA.” NBER Working Paper No. 7487, 2000.

Blaug, Mark. Economic Theory in Retrospect. Cambridge: Cambridge University Press, 1985.

Boulakia, Jean David C. “Ibn Khaldun: A Fourteenth-Century Economist.” Journal of Political Economy 79, no. 5 (1971): 1105-18.

Chapra, M. Umer. The Future of Economics: An Islamic Perspective. Leicester, UK: The Islamic Foundation, 2000.

Cline, William R. Potential Effects of Income Redistribution on Economic Growth. New York: Praeger, 1973.

DeSmogyi, Joseph N. “Economic Theory in Classical Arabic Literature.” Studies in Islam (Delhi), (1965): 1-6.

Diener, E., and Shigehiro Oishi. “Money and Happiness: Income and Subjective Well-being.” In Culture and Subjective Well-being, edited by E. Diener and E. Suh. Cambridge, MA: MIT Press, 2000.

Dimashqi, Abu al-Fadl Ja‘far ibn ‘Ali al-. Al-Isharah ila Mahasin al-Tijarah, Al-Bushra al-Shurbaji, editor. Cairo: Maktabah al-Kulliyat al-Azhar, 1977.

Duri, A.A. “Baghdad.” The Encyclopedia of Islam, 894-99. Leiden: Brill, 1986.

Easterlin, Richard. “Does Economic Growth Improve the Human Lot? Some Empirical Evidence.” In Nations and Households in Economic Growth: Essays in Honor of Moses Abramowitz, edited by Paul David and Melvin Reder. New York: Academic Press, 1974.

Easterlin, Richard. “Will Raising the Income of All Increase the Happiness of All?” Journal of Economic Behavior and Organization 27, no. 1 (1995): 35-48.

Easterlin, Richard. “Income and Happiness: Towards a Unified Theory.” Economic Journal 111, no. 473 (2001).

Essid, M. Yassine. A Critique of the Origins of Islamic Economic Thought. Leiden: Brill, 1995.

Feyerabend, Paul. Against Method: Outline of an Anarchistic Theory of Knowledge. London: Verso, third edition, 1993.

Fischel, W.J. “Djahbadh.” In Encyclopedia of Islam, volume 2, 382-83. Leiden: Brill, 1992.

Friedman, Milton. Essays in Positive Economics. Chicago: University of Chicago Press, 1953.

George, Henry. Progress and Poverty. New York: Robert Schalkenback Foundation, 1955.

Ghazanfar, S.M. Medieval Islamic Economic Thought: Filling the Great Gap in European Economics. London: Routledge Curzon, 2003.

Groenewegen, P.D. “A Note on the Origin of the Phrase, ‘Supply and Demand.’” Economic Journal 83, no. 330 (1973): 505-09.

Hausman, Daniel, and Michael McPherson. “Taking Ethics Seriously: Economics and Contemporary Moral Philosophy.” Journal of Economic Literature 31, no. 2 (1993): 671-731.

Ibn Khaldun. Muqaddimah. Cairo: Al-Maktabah al-Tijariyyah al-Kubra. See also its translation under Rosenthal (1967) and selections from it under Issawi (1950).

Ibn Taymiyyah. Majmu‘ Fatawa Shaykh al-Islam Ahmad Ibn Taymiyyah. ‘Abd al-Rahman al-‘Asimi, editor. Riyadh: Matabi‘ al-Riyad, 1961-63.

Islahi, A. Azim. History of Economic Thought in Islam. Aligharh, India: Department of Economics, Aligharh Muslim University, 1996.

Issawi, Charles. An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406). London: John Murray, 1950.

Jahiz, Amr ibn Bahr al-. Kitab al-Tabassur bi al-Tijarah. Beirut: Dar al-Kitab al-Jadid, 1983.

Jay, Elizabeth, and Richard Jay. Critics of Capitalism: Victorian Reactions to Political Economy. Cambridge: Cambridge University Press, 1986.

Kenny, Charles. “Does Growth Cause Happiness, or Does Happiness Cause Growth?” Kyklos 52, no. 1 (1999): 3-26.

Koopmans, T.C. “Inter-temporal Distribution and ‘Optimal’ Aggregate Economic Growth.” In Fellner et al., Ten Economic Studies in the Tradition of Irving Fisher. New York: John Wiley and Sons, 1969.

Mahdi, Mohsin. Ibn Khaldun’s Philosophy of History. Chicago: University of Chicago Press, 1964.

Maqrizi, Taqi al-Din Ahmad ibn Ali al-. Ighathah al-Ummah bi Kashf al-Ghummah. Hims, Syria: Dar ibn al-Wahid, 1956. (See its English translation by Allouche, 1994).

Mawardi, Abu al-Hasan ‘Ali al-. Adab al-Dunya wa al-Din. Mustafa al Saqqa, editor. Cairo: Mustafa al-Babi al Halabi, 1955.

Mawardi, Abdu al-Hasan. Al-Ahkam al-Sultaniyyah wa al-Wilayat al-Diniyyah. Cairo: Mustafa al-Babi al-Halabi, 1960. (The English translation of this book by Wafa Wahba has been published under the title, The Ordinances of Government. Reading: Garnet, 1996.)

Mirakhor, Abbas. “The Muslim Scholars and the History of Economics: A Need for Consideration.” American Journal of Islamic Social Sciences (1987): 245-76.

Misri, Rafiq Yunus al-. Al-Islam wa al-Nuqud. Jeddah: King Abdulaziz University, 1981.

Meyer, M.S. “Economic Thought in the Ottoman Empire in the 14th – Early 19th Centuries.” Archiv Orientali 4, no. 57 (1989): 305-18.

Myers, Milton L. The Soul of Modern Economic Man: Ideas of Self-Interest, Thomas Hobbes to Adam Smith. Chicago: University of Chicago Press, 1983.

North, Douglass C. Structure and Change in Economic History. New York: W.W. Norton, 1981.

North, Douglass C. “Economic Performance through Time.” American Economic Review 84, no. 2 (1994): 359-68.

Oswald, A.J. “Happiness and Economic Performance,” Economic Journal 107, no. 445 (1997): 1815-31.

Pifer, Josef. “Scholasticism.” Encyclopedia Britannica 16 (1978): 352-57.

Robbins, Lionel. An Essay on the Nature and Significance of Economic Science. London: Macmillan, second edition, 1935.

Rosenthal, Franz. Ibn Khaldun: The Muqaddimah, An Introduction to History. Princeton, NJ: Princeton University Press, 1967.

Sarakhsi, Shams al-Din al-. Kitab al-Mabsut. Beirut: Dar al-Ma‘rifah, third edition, 1978 (particularly “Kitab al-Kasb” of al-Shaybani in Vol. 30: 245-97).

Sarton, George. Introduction to the History of Science. Washington, DC: Carnegie Institution, 1927-48. (Three volumes issued between 1927 and 1948; the second and third volumes each have two parts.)

Schumpeter, Joseph A. History of Economic Analysis. New York: Oxford University Press, 1954.

Sen, Amartya. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford: Clarendon Press, 1981.

Sen, Amartya. On Ethics and Economics. Oxford: Basil Blackwell, 1987.

Siddiqi, M. Nejatullah. “History of Islamic Economic Thought.” In Lectures on Islamic Economics, edited by Ausaf Ahmad and K.R. Awan, 69-90. Jeddah: IDB/IRTI, 1992.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. New York: Modern Library, 1937.

Solo, Robert A. “Values and Judgments in the Discourse of the Sciences.” In Value Judgment and Income Distribution, edited by Robert A. Solo and Charles A. Anderson, 9-40. New York: Praeger, 1981.

Spengler, Joseph. “Economic Thought in Islam: Ibn Khaldun.” Comparative Studies in Society and History (1964): 268-306.

Thweatt, W.O. “Origins of the Terminology, Supply and Demand.” Scottish Journal of Political Economy (1983): 287-94.

Toynbee, Arnold J. A Study of History. London: Oxford University Press, second edition, 1935.

Udovitch, Abraham L. Partnership and Profit in Medieval Islam. Princeton, NJ: Princeton University Press, 1970.

Waliyullah, Shah. Hujjatullah al-Balighah. M. Sharif Sukkar, editor. Beirut: Dar Ihya al-Ulum, second edition, two volumes, 1992. (An English translation of this book by Marcia K. Hermansen was published by Brill, Leiden, 1996.)

Watt, W. Montgomery. The Influence of Islam on Medieval Europe. Edinburgh: Edinburgh University Press, 1972.


[1] This is the liberal version of the secular and materialist worldviews. There is also the totalitarian version which does not have faith in the individuals’ ability to manage private property in a way that would ensure social well-being. Hence its prescription is to curb individual freedom and to transfer all means of production and decision making to a totalitarian state. Since this form of the secular and materialist worldview failed to realize human well-being and has been overthrown practically everywhere, it is not discussed in this paper.

[2] The literature on economic development is full of assertions that improvement in income distribution is in direct conflict with economic growth. For a summary of these views, see Cline, 1973, Chapter 2. This has, however, changed, and there is hardly any development economist now who argues that injustice can help promote development.

[3] North has used the term ‘nasty’ for all such behavior. See the chapter “Ideology and Free Rider,” in North, 1981.

[4] Some of these scholars include Abu Yusuf (d. 798), al-Mawardi (d. 1058), Abu Ya’la (d. 1065), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), Ibn Taymiyyah (d. 1328), Ibn Khaldun (d. 1406), Shah Waliyullah (d. 1762), Jamaluddin al-Afghani (d. 1897), Muhammad ‘Abduh (d. 1905), Muhammad Iqbal (d. 1938), Hasan al-Banna (d. 1949), Sayyid Mawdudi (d. 1979), and Baqir al-Sadr (d. 1980).

[5] Some of these authors include al-Katib (d. 749), Ibn al-Muqaffa (d. 756), al-Nu‘man (d. 974), al-Mawardi (d. 1058), Kai Ka’us (d. 1082), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), and al-Turtushi (d. 1127). (For details, see Essid, 1995, pp. 19-41.)

[6] For the fallacy of the Great Gap thesis, see Mirakhor (1987) and Ghazanfar (2003), particularly the “Foreword” by Todd Lowry and the “Introduction” by Ghazanfar.

[7] The full name of the book (given in the bibliography) may be freely translated as “The Book of Lessons and the Record of Cause and Effect in the History of Arabs, Persians and Berbers and their Powerful Contemporaries.” Several different editions of the Muqaddimah are now available in Arabic. The one I have used is that published in Cairo by al-Maktabah al-Tijariyyah al-Kubra without any indication of the year of publication. It has the advantage of showing all vowel marks, which makes reading relatively easy. The Muqaddimah was translated into English in three volumes by Franz Rosenthal. Its first edition was published in 1958 and the second edition in 1967. Selections from the Muqaddimah by Charles Issawi were published in 1950 under the title, An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406).

A considerable volume of literature is now available on Ibn Khaldun. This includes Spengler, 1964; Boulakia, 1971; Mirakhor, 1987; and Chapra, 2000.

[8] For some of these contributions, see Spengler, 1964; DeSmogyi, 1965; Mirakhor, 1987; Siddiqi, 1992; Essid, 1995; Islahi, 1996; Chapra, 2000; and Ghazanfar, 2003.

[9] For a more detailed discussion of taxation by various Muslim scholars, see the section on “Literature on Mirrors for Princes” in Essid, 1995, pp. 19-41.

[10] This letter is a significant development over the letter of Abu Yusuf to Caliph Harun al-Rashid (1933/34, pp. 3-17). It is more comprehensive and covers a larger number of topics.

[11] These are “On tax revenues and the reason for their being low and high” (pp. 279-80) and “Injustice ruins development” (pp. 286-410).

[12] Bear in mind that this was stated at a time when commodity money, which the government cannot simply ‘create,’ was in use, and fiduciary money had not yet become the rule of the day.

[13] This was during the Slave (Mamluk) Dynasty in Egypt, which is divided into two periods. The first period was that of the Bahri (or Turkish) Mamluks (1250-1382), who have generally received praise in the chronicles of their contemporaries. The second period was that of the Burji Mamluks (Circassians, 1382-1517). This period was beset by a series of severe economic crises. (For details see Allouche, 1994.)

[14] Shah Waliyullah al-Dihlawi, popularly known as Waliyullah, was born in 1703, four years before the death of the Mughal Emperor Aurangzeb (1658-1707). Aurangzeb’s rule, spanning a period of forty-nine years, was followed by a great deal of political instability – ten changes of ruler during Waliyullah’s lifespan of 59 years – leading ultimately to the weakening and decline of the Mughal Empire.

[15] For a brief account of the general decline and disintegration of the Muslim world during the fourteenth century, see Muhsin Mahdi, 1964, pp. 17-26.

[16] For a discussion of the causes of Muslim decline, see Chapra, 2000, pp. 173-252.

[17] According to Blaug (1985), economics became an academic discipline in the 1880s (p. 3).

Citation: Chapra, M. “Islamic Economics: What It Is and How It Developed”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/islamic-economics-what-it-is-and-how-it-developed/

Industrial Sickness Funds

John E. Murray, University of Toledo

Overview and Definition

Industrial sickness funds provided an early form of health insurance. They were financial institutions that extended cash payments and, in some cases, medical benefits to members who became unable to work due to sickness or injury. The term “industrial sickness funds” is a later construct describing funds organized by companies (also known as establishment funds) and by labor unions. These funds were widespread geographically in the United States; the 1890 Census of Insurance found 1,259 nationwide, with concentrations in the Northeast, Midwest, California, Texas, and Louisiana (U.S. Department of the Interior, 1895). By the turn of the twentieth century, some industrial sickness funds had accumulated considerable experience at managing sickness benefits. A few predated the Civil War. When the U.S. Commissioner of Labor surveyed a sample of sickness funds in 1908, the survey counted 867 non-fraternal funds nationwide that provided temporary disability benefits (U.S. Commissioner of Labor, 1909). By the time of World War I, these funds, together with similar funds sponsored by fraternal societies, covered 30 to 40 percent of non-agricultural wage workers in the more industrialized states, or by extension, eight to nine million workers nationwide (Murray 2007a). Sickness funds were numerous, widespread, and in general carefully operated.

Industrial sickness funds were among the earliest providers of any type of health or medical benefits in the United States. In fact, their earliest product was called “workingman’s insurance” or “sickness insurance,” terms that described their clientele and purpose accurately. In the late Progressive Era, reformers promoted government insurance programs that would supplant the sickness funds. To sound more British, they used the term “health insurance,” and that is the phrase we still use for this kind of insurance contract (Numbers 1978). In the history of health insurance, the funds were contemporary with benefit operations of fraternal societies (see fraternal sickness insurance) and led into the period of group health insurance (see health insurance, U.S.). They should be distinguished from the sickness benefits provided by some industrial insurance policies, which required weekly premium payments and paid a cash benefit upon death intended to cover burial expenses.

Many written histories of health insurance have missed the important role industrial sickness funds played in both relief of worker suffering and in the political process. Recent historians have tended to criticize, patronize, or ignore sickness funds. Lubove (1986) complained that they stood in the way of government insurance for all workers. Klein (2003) claimed that they were inefficient, without making explicit her standard for that judgment. Quadagno (2005) simply asserted that no one had thought of health insurance before the 1920s. Contemporary commentators such as I. M. Rubinow and Irving Fisher criticized workers who preferred “hopelessly inadequate” sickness fund insurance over government insurance as “infantile” (Derickson 2005). But these criticisms stemmed more from their authors’ ideological preconceptions than from close study of these institutions.

Rise and Operations of Industrial Sickness Funds

The period of their greatest extent and importance was from the 1880s to around 1940. The many state labor bureau surveys of individual workers, since digitized by the University of California’s Historical Labor Statistics Project and available for download at EH.net, often asked questions such as “do you belong to a benefit society,” meaning a fraternal sickness benefit fund or an industrial sickness fund. Of the surveys from the early 1890s that included this question, around a quarter of respondents indicated that they belonged to such societies. Later, closer to 1920, several states examined the extent of sickness insurance coverage in response to movements to create governmental health insurance for workers (Table 1). These later studies indicated that in the Northeast, Midwest, and California, between thirty and forty percent of non-agricultural workers were covered. Thus, remarkably, these societies had actually increased their market share over a three-decade period in which the labor force itself grew from 13 to 30 million workers (Murray 2007a). Industrial sickness funds were dynamic institutions, capable of dealing with an ever-expanding labor market.

Table 1:
Sources of Insurance in Three States (thousands of workers)

Source/state Illinois Ohio California
Fraternal society 250 200 291
Establishment fund 116 130 50
Union fund 140 85 38
Other sick fund 12 N/a 35
Commercial insurance 140 85 2 (?)
Total 660 500 416
Eligible labor force 1,850 1,500 995
Share insured 36% 33% 42%
Sources: Illinois (1919), Ohio, (1919), California (1917), Lee et al. (1957).

Industrial sickness funds operated in a relatively simple fashion, but one that enabled them to mitigate the usual information problems that emerge in insurance markets. The process of joining a fund and making a claim typically worked as follows. A newly hired worker in a plant with such a fund explicitly applied to join, often after a probationary period during which fund managers could observe his baseline health and work habits. After admission to the fund, he paid an entrance fee followed by weekly dues. Since the average industrial worker in the 1910s earned about ten dollars a week, the entrance fee of one dollar was a half-day’s pay and the dues of ten cents made the cost to the worker around one percent of his pay packet.

A member who was unable to work contacted his fund, which then sent either a committee of fellow fund members, a physician, or both to check on the member-now-claimant. If they found him as sick as he had said he was, and in their judgment he was unable to work, after a one week waiting period he received around half his weekly pay. The waiting period was intended to let transient, less serious illnesses resolve so that the fund could support members with longer-term medical problems. To continue receiving the sick pay the claimant needed to allow periodic examinations by a physician or visiting committee. In rough terms, the average worker missed two percent of a work year, or about a week every year, a rate that varied by age and industry. The quarter of all workers who missed any work lost on average one month’s pay; thus a typical incapacitated worker received three and a half weeks of benefit per year. Comparing the cost of dues and expected value of benefits shows that the sickness funds were close to an actuarially fair bet: $5.00 in annual dues compared to (0.25 chance of falling ill) x (3.5 weeks of benefits) x ($5.00 weekly benefit), or about four and a half dollars in expected benefits. Thus, sickness funds appear to have been a reasonably fair deal for workers.
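
The arithmetic behind this actuarial comparison can be checked with a short calculation. This is a minimal sketch using the round illustrative figures quoted in the text (a ten-dollar weekly wage, ten-cent dues, a one-in-four chance of a claim, and three and a half weeks of benefits), not data from any particular fund.

```python
# Check the actuarial-fairness arithmetic for a typical 1910s sickness fund,
# using the round illustrative figures quoted in the text.

weekly_wage = 10.00                 # average industrial wage, dollars per week
weekly_dues = 0.10                  # fund dues, dollars per week
weekly_benefit = weekly_wage / 2    # sick pay was around half of weekly pay

annual_dues = 52 * weekly_dues      # about $5.20 per year ("around $5.00")
p_claim = 0.25                      # share of workers missing any work in a year
weeks_paid = 3.5                    # about a month lost, less the one-week wait

expected_benefit = p_claim * weeks_paid * weekly_benefit

print(f"Annual dues:          ${annual_dues:.2f}")       # $5.20
print(f"Expected benefit:     ${expected_benefit:.2f}")  # $4.38
print(f"Dues as share of pay: {weekly_dues / weekly_wage:.1%}")  # 1.0%
```

The near-equality of dues paid and benefits expected is what the text means by “close to an actuarially fair bet.”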

Establishment funds did not invent sickness benefits by any means. Rather, they systematized previous arrangements for supporting sick workers or the survivors of deceased workers. The old way was to pass the hat, which was characterized by random assessments and arbitrary financial awards. Workers and employers both observed that contributors and beneficiaries alike detested passing the hat. Fellow workers complained about the surprise nature of the hat’s appearance, and beneficiaries faced humiliation on top of grief when the hat contained less money than had been collected for a more popular co-worker. Eventually rules replaced discretion, and benefits were paid according to a published schedule, either as a flat rate per diem or as a percentage of wages. The 1890 Census of Insurance reported that only a few funds extended benefits “at the discretion of the society,” and by the time of the 1908 Commissioner of Labor survey the practice had disappeared (Murray 2007a).

Labor union funds began in the early nineteenth century. In the earliest union funds, members of craft unions pledged to complete jobs that ill brothers had contracted to perform but could not finish due to illness. Eventually cash benefit payments replaced the in-kind promises of labor, accompanied by cash premium payments into the union’s kitty. While criticized by many observers as unstable, labor union funds actually operated in transparent fashion. Even funds that offered unemployment benefits survived the depression of the mid-1890s by reducing benefit payments and enacting other conservative measures. Another criticism was that their benefits were too small in amount and too brief in duration, but according to the 1908 Commissioner of Labor survey, labor union funds and establishment funds offered similar levels of benefits. The cost-benefit ratio did favor establishment funds, but establishment fund membership ended with employment at a particular company, while union funds offered the substantial attraction of benefits that were portable from job to job.

The cash payment to sick workers created an incentive to take sick leave that workers without sickness insurance did not face; this is the moral hazard of sick pay. Further, workers who believed that they were more likely to make a sick claim would have a stronger incentive to join a sickness fund than a worker in relatively good health; this is called adverse selection. Early twentieth-century commentators on government sickness insurance disagreed on the extent and even the existence of moral hazard and adverse selection in sickness insurance. Later statistical studies found evidence for both in establishment funds. However, the funds themselves had understood the potential financial damage each could wreak and strategized to mitigate such losses. The magnitude of the sick pay moral hazard was small, and affected primarily the tendency of the worker to make a claim in the first place. Many sickness funds limited their liability here by paying for the physician who examined the claimant and thus was responsible for approving extended sickness payments. Physicians appear to have paid attention to the wishes of those who paid them. Among claimants in funds that paid the examining physician directly, claims ended significantly earlier. By the same token, physicians who were paid by the worker tended to approve longer absences for that worker, a sign that physicians too responded to incentives.

Testing for adverse selection depends on whether membership in a company’s fund was the worker’s choice (that is, it was voluntary) or the company’s choice (that is, it was compulsory). In fact, among establishment funds in which membership was voluntary, claim rates per member were significantly higher than in mandatory-membership funds. This indicates that voluntary funds were especially attractive to sicker workers, which is the essence of adverse selection. To reduce the risks of adverse selection, funds imposed age limits to keep out older applicants, required physical examinations to discourage the obviously ill, used probationary periods to reveal chronic illness, and wrote pre-existing condition clauses to avoid paying for such conditions (Murray 2007a). Sickness funds thus cleverly managed information problems typical of insurance markets.

Industrial Sickness Funds and Progressive Era Politics

Industrial sickness funds were the linchpin of efforts to promote and to oppose the Progressive campaign for state-level mandatory government sickness insurance. One consistent claim made by government insurance supporters was that workers could neither afford to pay for sickness insurance nor to save in advance of financially damaging health problems. The leading advocacy organization, the American Association for Labor Legislation (AALL), reported in its magazine that “Savings of Wage-Earners Are Insufficient to Meet this Loss,” meaning lost income during sickness (American Association for Labor Legislation 1916a). However, worker surveys of savings, income, and insurance holdings revealed that workers rationally strategized according to their varying needs and abilities across the life-cycle. Young workers saved little and were less likely to belong to industrial sickness funds—but were less likely to miss work due to illness as well. Middle aged workers, married with families to support, were relatively more likely to belong to a sickness fund. Older workers pursued a different strategy, saving more and relying on sickness funds less; among other factors, they wanted greater liquidity in their financial assets (Murray 2007a). Worker strategies reflected varying needs at varying stages of life, some (but not all) of which could be adequately addressed by membership in sickness funds.

Despite claims to the contrary by some historians, there was little popular support for government sickness insurance in early twentieth century America. Lobbying by the AALL led twelve states to charge investigatory commissions with determining the need for and feasibility of government sickness insurance (Moss 1996). The AALL offered a basic bill that could be adjusted to meet a state’s particular needs (American Association for Labor Legislation 1916b). Typically the Association prodded states to adopt a version of German insurance, which would keep the many small industrial sickness funds while forcing new members into some and creating new funds for other workers. However, these bills met consistent defeat in statehouses, earning only a fleeting victory in the New York Senate in 1919, which was followed by the bill’s death in an Assembly committee (Hoffman 2001). In the previous year a California referendum on a constitutional amendment that would allow the government to provide sickness insurance lost by nearly three to one (Costa 1996).

After the Progressive campaign exhausted itself, industrial sickness funds continued to grow through the 1920s, but the Great Depression exposed deep flaws in their structure. Many labor union funds, without a sponsoring firm to act as lender of last resort, dissolved. Establishment funds failed at a surprisingly low rate, but their survival was made possible by the tendency of firms to fire less healthy workers. Federal surveys in Minnesota found that ill-health led to earlier job loss in the Depression, and comparisons of self-reported health in later surveys indicated that the unemployed were in fact in poorer health than the employed, and the disparity grew as the Depression deepened. Thus, industrial sickness funds paradoxically enjoyed falling claim rates (and hence reduced expenses) as the economy deteriorated (Murray 2007a).

Decline and Rebirth of Sickness Funds

At the same time, commercial insurers had been engaging in ever more productive research into the actuarial science of group health insurance. Eventually the insurers cut premium rates while offering benefits comparable to those available through sickness funds. As a result, the commercial insurers and Blue Cross/Blue Shield came to dominate the market for health benefits. A federal survey that covered the early 1930s found more firms with group health than with mutual benefit societies, but the benefit societies still insured more than twice as many workers (Sayers et al. 1937). By the later 1930s that gap in the number of firms had widened in favor of group health (Figure 1), and the number of workers insured was about equal. After the mid-1940s, industrial sickness funds were no longer a significant player in markets for health insurance (Murray 2007a).

Figure 1: Health Benefit Provision and Source
Source: Dobbin (1992) citing National Industrial Conference Board surveys.

More recently, a type of industrial sickness fund has begun to stage a comeback. Voluntary employee beneficiary associations (VEBAs) fall under a 1928 federal law that was created to govern industrial sickness funds. VEBAs are trusts set up to pay employee benefits without earning profits for the company. In late 2007, the Big Three automakers each contracted with the United Auto Workers (UAW) to operate a VEBA that would provide health insurance for UAW members. If the automakers and their workers succeed in establishing VEBAs that stand the test of time, they will have resurrected a once-successful financial institution previously thought relegated to the pre-World War II economy (Murray 2007b).

References

American Association for Labor Legislation. “Brief for Health Insurance.” American Labor Legislation Review 6 (1916a): 155–236.

American Association for Labor Legislation. “Tentative Draft of an Act.” American Labor Legislation Review 6 (1916b): 239–68.

California Social Insurance Commission. Report of the Social Insurance Commission of the State of California, January 25, 1917. Sacramento: California State Printing Office, 1917.

Costa, Dora L. “Demand for Private and State Provided Health Insurance in the 1910s: Evidence from California.” Photocopy, MIT, 1996.

Derickson, Alan. Health Security for All: Dreams of Universal Health Care in America. Baltimore: Johns Hopkins University Press, 2005.

Dobbin, Frank. “The Origins of Private Social Insurance: Public Policy and Fringe Benefits in America, 1920-1950,” American Journal of Sociology 97 (1992): 1416-50.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Klein, Jennifer. For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State. Princeton: Princeton University Press, 2003.

Lee, Everett S., Ann Ratner Miller, Carol P. Brainerd, and Richard A. Easterlin, under the direction of Simon Kuznets and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, 1870-1950: Volume I, Methodological Considerations and Reference Tables. Philadelphia: Memoirs of the American Philosophical Society 45, 1957.

Lubove, Roy. The Struggle for Social Security, 1900-1930. Second edition. Pittsburgh: University of Pittsburgh Press, 1986.

Moss, David. Socializing Security: Progressive-Era Economists and the Origins of American Social Policy. Cambridge: Harvard University Press, 1996.

Murray, John E. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven: Yale University Press, 2007a.

Murray, John E. “UAW Members Must Treat Health Care Money as Their Own,” Detroit Free Press, 21 November 2007b.

Ohio Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions: Report, Recommendations, Dissenting Opinions. Columbus: Heer, 1919.

Quadagno, Jill. One Nation, Uninsured: Why the U. S. Has No National Health Insurance. New York: Oxford University Press, 2005.

Sayers, R. R., Gertrud Kroeger, and W. M. Gafafer. “General Aspects and Functions of the Sick Benefit Organization.” Public Health Reports 52 (November 5, 1937): 1563–80.

State of Illinois. Report of the Health Insurance Commission of the State of Illinois, May 1, 1919. Springfield: State of Illinois, 1919.

U.S. Department of the Interior. Report on Insurance Business in the United States at the Eleventh Census: 1890; pt. 2, “Life Insurance.” Washington, DC: GPO, 1895.

U.S. Commissioner of Labor. Twenty-third Annual Report of the Commissioner of Labor, 1908: Workmen’s Insurance and Benefit Funds in the United States. Washington, DC: GPO, 1909.

Citation: Murray, John. “Industrial Sickness Funds, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/industrial-sickness-funds/

Immigration to the United States

Raymond L. Cohn, Illinois State University (Emeritus)

For good reason, it is often said the United States is a nation of immigrants. Almost every person in the United States is descended from someone who arrived from another country. This article discusses immigration to the United States from colonial times to the present. The focus is on individuals who paid their own way, rather than slaves and indentured servants. Various issues concerning immigration are discussed: (1) the basic data sources available, (2) the variation in the volume over time, (3) the reasons immigration occurred, (4) nativism and U.S. immigration policy, (5) the characteristics of the immigrant stream, (6) the effects on the United States economy, and (7) the experience of immigrants in the U.S. labor market.

For readers who wish to further investigate immigration, the following works listed in the Reference section of this entry are recommended as general histories of immigration to the United States: Hansen (1940); Jones (1960); Walker (1964); Taylor (1971); Miller (1985); Nugent (1992); Erickson (1994); Hatton and Williamson (1998); and Cohn (2009).

The Available Data Sources

The primary source of data on immigration to the United States is the Passenger Lists, though U.S. and state census materials, Congressional reports, and company records also contain material on immigrants. In addition, the Integrated Public Use Microdata Series (IPUMS) web site at the University of Minnesota (http://www.ipums.umn.edu/usa/) contains data samples drawn from a number of federal censuses. Since the samples are of individuals and families, the site is useful in immigration research. A number of the countries from which the immigrants left also kept records about the individuals. Many of these records were originally summarized in Ferenczi (1970). Although records from other countries are useful for some purposes, the U.S. records are generally viewed as more complete, especially for the period before 1870. It is worthy of note that comparisons of the lists between countries often lead to somewhat different results. It is also probable that, during the early years, a few of the U.S. lists were lost or never collected.

Passenger Lists

The U.S. Passenger Lists resulted from an 1819 law requiring every ship carrying passengers that arrived in the United States from a foreign port to file with the port authorities a list of all passengers on the ship. These records are the basis for the vast majority of the historical data on immigration. For example, virtually all of the tables in the chapter on immigration in Carter et al. (2006) are based on these records. The Passenger Lists recorded a great deal of information. Each list indicates the name of the ship, the name of the captain, the port(s) of embarkation, the port of arrival, and the date of arrival. Following this information is a list of the passengers. Each person’s name is listed, along with age, gender, occupation, country of origin, country of destination, and whether or not the person died on the voyage. It is often possible to distinguish family groups since family members were usually grouped together and, to save time, the compilers frequently used ditto marks to indicate the same last name. Various data based on the lists were published in Senate or Congressional Reports at the time. Due to their usefulness in genealogical research, the lists are now widely available on microfilm and are increasingly available on CD-ROM. Even a few public libraries in major cities have full or partial collections of these records. Most of the ship lists are also available online at various websites.

The Volume of Immigration

Both the total volume of immigration to the United States and the immigrants’ countries of origin varied substantially over time. Table 1 provides the basic data on total immigrant volume by time period broken down by country or area of origin. The column “Average Yearly Total – All Countries” presents the average yearly total immigration to the United States in the time period given. Immigration rates – the average number of immigrants entering per thousand individuals in the U.S. population – are shown in the next column. The columns headed by country or area names show the percentage of immigrants coming from that place. The time periods in Table 1 have been chosen for illustrative purposes. A few things should be noted concerning the figures in Table 1. First, the estimates for much of the period since 1820 are based on the original Passenger Lists and are subject to the caveats discussed above. The estimates for the period before 1820 are the best currently available but are less precise than those after 1820. Second, though it was legal to import slaves into the United States (or the American colonies) before 1808, the estimates presented exclude slaves. Third, though illegal immigration into the United States has occurred, the figures in Table 1 include only legal immigrants. In 2015, the total number of illegal immigrants in the United States was estimated at around 11 million. These individuals were mostly from Mexico, Central America, and Asia.
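
The rate column can be reconstructed from the volume figures. The sketch below is illustrative only; the population base is an assumed rough mid-period value, not the exact year-by-year denominators behind Table 1.

```python
# How an immigration rate per 1,000 residents is computed, using the
# 1900-1914 row of Table 1. The population figure is an assumed rough
# mid-period approximation, not the table's actual year-by-year base.

avg_yearly_immigrants = 891_806    # 1900-1914 average from Table 1
us_population = 87_000_000         # approximate U.S. population circa 1907

rate_per_1000 = avg_yearly_immigrants / us_population * 1000
print(f"{rate_per_1000:.1f} per 1,000 residents")  # ~10.3, cf. 10.2 in Table 1
```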

Trends over Time

From the data presented in Table 1, it is apparent that the volume of immigration and its rate relative to the U.S. population varied over time. Immigration was relatively small until a noticeable increase occurred in the 1830s and a huge jump in the 1840s. The volume passed 200,000 for the first time in 1847, and the period between 1847 and 1854 saw the highest rate of immigration in U.S. history. From the peak reached between 1847 and 1854, volume alternately fell and rose through 1930. For the period from 1847 through 1930, the average yearly volume was 434,000. During these years, immigrant volume peaked between 1900 and 1914, when an average of almost 900,000 immigrants arrived in the United States each year. This period is also second in terms of the rate of immigration relative to the U.S. population. The volume and rate fell to low levels between 1931 and 1946, though by the 1970s the volume had again reached that experienced between 1847 and 1930. The rise in volume continued through the 1980s and 1990s, though the rate per one thousand American residents has remained well below that experienced before 1915. It is notable that since about 1990, the average yearly volume of immigration has surpassed the previous peak experienced between 1900 and 1914. In 2015, reflecting the large volume of immigration, about 15 percent of the U.S. population was foreign-born.

Table 1
Immigration Volume and Rates

Years Average Yearly Total – All Countries Immigration Rate (per 1,000 Population) Percent of Average Yearly Total, by Origin:
Great Britain Ireland Scandinavia and Other NW Europe Germany Central and Eastern Europe Southern Europe Asia Africa Australia and Pacific Islands Mexico Other America
1630-1700 2,200 —- —- —- —- —- —- —- —- —- —- —- —-
1700-1780 4,325 —- —- —- —- —- —- —- —- —- —- —- —-
1780-1819 9,900 —- —- —- —- —- —- —- —- —- —- —- —-
1820-1831 14,538 1.3 22 45 12 8 0 2 0 0 —- 4 6
1832-1846 71,916 4.3 16 41 9 27 0 1 0 0 —- 1 5
1847-1854 334,506 14.0 13 45 6 32 0 0 1 0 —- 0 3
1855-1864 160,427 5.2 25 28 5 33 0 1 3 0 —- 0 4
1865-1873 327,464 8.4 24 16 10 34 1 1 3 0 0 0 10
1874-1880 260,754 5.6 18 15 14 24 5 3 5 0 0 0 15
1881-1893 525,102 8.9 14 12 16 26 16 8 1 0 0 0 6
1894-1899 276,547 3.9 7 12 12 11 32 22 3 0 0 0 2
1900-1914 891,806 10.2 6 4 7 4 45 26 3 0 0 1 5
1915-1919 234,536 2.3 5 2 8 1 7 21 6 0 1 8 40
1920-1930 412,474 3.6 8 5 8 9 14 16 3 0 0 11 26
1931-1946 50,507 0.4 10 2 9 15 8 12 3 1 1 6 33
1947-1960 252,210 1.5 7 2 6 8 4 10 8 1 1 15 38
1961-1970 332,168 1.7 6 1 4 6 4 13 13 1 1 14 38
1971-1980 449,331 2.1 3 0 1 2 4 8 35 2 1 14 30
1981-1990 733,806 3.1 2 0 1 1 3 2 37 2 1 23 27
1991-2000 909,264 3.4 2 1 1 1 11 2 38 5 1 30 9
2001-2008 1,040,951 4.4 2 0 1 1 9 1 35 7 1 17 27
2009-2015 1,046,459 4.8 1 0 1 1 5 1 40 10 1 14 27

Sources: Years before 1820: Grabbe (1989); 1820-1970: Historical Statistics (1976); 1971-2001: U.S. Immigration and Naturalization Service (various years); 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Sources of Immigration

The sources of immigration have changed a number of times over the years. In general, four relatively distinct periods can be identified in Table 1. Before 1881, the vast majority of immigrants, almost 86% of the total, arrived from northwest Europe, principally Great Britain, Ireland, Germany, and Scandinavia. During the colonial period, though the data do not allow an accurate breakdown, most immigrants arrived from Britain, with smaller numbers coming from Ireland and Germany. The years between 1881 and 1893 saw a transition in the sources of U.S. immigrants. After 1881, immigrant volume from central, eastern, and southern Europe began to increase rapidly. Between 1894 and 1914, immigrants from southern, central, and eastern Europe accounted for 69% of the total. With the onset of World War I in 1914, the sources of U.S. immigration again changed. From 1915 to the present day, a major source of immigrants to the United States has been the Western Hemisphere, accounting for 46% of the total. In the period between 1915 and 1960, virtually all of the remaining immigrants came from Europe, though no specific part of Europe was dominant. Beginning in the 1960s, immigration from Europe fell off substantially and was replaced by a much larger percentage of immigrants from Asia. Also noteworthy is the rise in immigration from Africa in the twenty-first century. Thus, over the course of U.S. history, the sources of immigration changed from northwestern Europe to southern, central and eastern Europe to the Americas in combination with Europe to the current situation where most immigrants come from the Americas, Asia and Africa.

Duration of Voyage and Method of Travel

Before the 1840s, immigrants arrived on sailing ships. General information on the length of the voyage is unavailable for the colonial and early national periods. By the 1840s, however, the average voyage length for ships from the British Isles was five to six weeks, with those from the European continent taking a week or so longer. In the 1840s, a few steamships began to cross the Atlantic. Over the course of the 1850s, steamships began to account for a larger, though still minority, percentage of immigrant travel. By 1873, virtually all immigrants arrived on steamships (Cohn 2005). As a result, the voyage time fell initially to about two weeks and it continued to decline into the twentieth century. Steamships remained the primary means of travel until after World War II. As a consequence of the boom in airplane travel over the last few decades, most immigrants now arrive via air.

Place of Arrival

Where immigrants landed in the United States varied, especially in the period before the Civil War. During the colonial and early national periods, immigrants arrived not only at New York City but also at a variety of other ports, especially Philadelphia, Boston, New Orleans, and Baltimore. Over time, and especially when most immigrants began arriving via steamship, New York City became the main arrival port. No formal immigration facilities existed at any of the ports until New York City established Castle Garden as its landing depot in 1855. This facility, located at the tip of Manhattan, was replaced in 1892 with Ellis Island, which in turn operated until 1954.

Death Rates during the Voyage

A final aspect to consider is the mortality experienced by the individuals on board the ships. Information taken from the Passenger Lists for the period of the sailing ship between 1820 and 1860 finds a loss rate of one to two percent of the immigrants who boarded (Cohn, 2009). Given the length of the trip and taking into account the ages of the immigrants, this rate represents mortality approximately four times higher than that experienced by non-migrants. Mortality was mainly due to outbreaks of cholera and typhus on some ships, leading to especially high death rates among children and the elderly. There appears to have been little trend over time in mortality or differences in the loss rate by country of origin, though some evidence suggests the loss rate may have differed by port of embarkation. In addition, the best evidence from the colonial period finds a loss rate only slightly higher than that of the antebellum years. In the period after the Civil War, with the change to steamships and the resulting shorter travel time and improved on-board conditions, mortality on the voyages fell, though exactly how much has not been determined.

The Causes of Immigration

Economic historians generally believe no single factor led to immigration. In fact, different studies have tried to explain immigration by emphasizing different factors, with the first important study being done by Thomas (1954). The most recent attempt to comprehensively explain immigration has been by Hatton and Williamson (1998), who focus on the period between 1860 and 1914. Massey (1999) expresses relatively similar views. Hatton and Williamson view immigration from a country during this time as being caused by up to five different factors: (a) the difference in real wages between the country and the United States; (b) the rate of population growth in the country 20 or 30 years before; (c) the degree of industrialization and urbanization in the home country; (d) the volume of previous immigrants from that country or region; and (e) economic and political conditions in the United States. To this list can be added factors not relevant during the 1860 to 1914 period, such as the potato famine, the movement from sail to steam, and the presence or absence of immigration restrictions. Thus, a total of at least eight factors affected immigration.
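
The five Hatton-Williamson factors can be collected into a stylized reduced-form equation (a sketch for exposition only, not their actual estimating equation; the variable names here are illustrative):

$$m_{it} = \beta_1 \ln\!\left(\frac{w^{US}_{t}}{w_{it}}\right) + \beta_2\, g_{i,t-20} + \beta_3\, ind_{it} + \beta_4\, stock_{it} + \beta_5\, cond^{US}_{t} + \varepsilon_{it},$$

where $m_{it}$ is the emigration rate from country $i$ to the United States in period $t$; the log wage ratio captures factor (a); $g_{i,t-20}$ is population growth two to three decades earlier, factor (b); $ind_{it}$ is home-country industrialization and urbanization, factor (c); $stock_{it}$ is the stock of previous emigrants, factor (d); and $cond^{US}_{t}$ stands for U.S. economic and political conditions, factor (e). As the discussion below makes clear, each coefficient is expected to be positive.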

Causes of Fluctuations in Immigration Levels over Time

As discussed above, the total volume of immigration trended upward until World War I. The initial increase in immigration during the 1830s and 1840s was caused by improvements in shipping, more rapid population growth in Europe, and the potato famine in the latter part of the 1840s, which affected not only Ireland but also much of northwest Europe. As previously noted, the steamship replaced the sailing ship after the Civil War. By substantially reducing the length of the trip and increasing comfort and safety, the steamship encouraged an increase in the volume of immigration. Part of the reason volume increased was that temporary immigration became more likely. In this situation, an individual came to the United States not planning to stay permanently but instead planning to work for a period of time before returning home. All in all, the period from 1865 through 1914, when immigration was not restricted and steamships were dominant, saw an average yearly immigrant volume of almost 529,000. In contrast, average yearly immigration between 1820 and 1860 via sailing ship was only 123,000, and even between 1847 and 1860 was only 266,000.

Another feature of the data in Table 1 is that the yearly volume of immigration fluctuated quite a bit in the period before 1914. The fluctuations are mainly due to changes in economic and political conditions in the United States. Essentially, periods of low volume corresponded with U.S. economic depressions or times of widespread opposition to immigrants. In particular, volume declined during the nativist outbreak of the 1850s, the major depressions of the 1870s and 1890s, and the Great Depression of the 1930s. As discussed in the next section, the United States imposed widespread restrictions on immigration beginning in the 1920s. Since then, the volume has been subject to more direct determination by the United States government. Thus, fluctuations in the total volume of immigration over time are due to four of the eight factors discussed in the first paragraph of this section: the potato famine, the movement from sail to steam, economic and political conditions in the United States, and the presence or absence of immigration restrictions.

Factors Influencing Immigration Rates from Particular Countries

The other four factors are primarily used to explain changes in the source countries of immigration. A larger difference in real wages between the country and the United States increased immigration from the country because it meant immigrants had more to gain from the move. Because most immigrants were between 15 and 35 years old, a higher population growth 20 or 30 years earlier meant there were more individuals in the potential immigrant group. In addition, a larger volume of young workers in a country reduced job prospects at home and further encouraged immigration. A greater degree of industrialization and urbanization in the home country typically increased immigration because traditional ties with the land were broken during this period, making laborers in the country more mobile. Finally, the presence of a larger volume of previous immigrants from that country or region encouraged more immigration because potential immigrants now had friends or relatives to stay with who could smooth their transition to living and working in the United States.

Based on these four factors, Hatton and Williamson explain the rise and fall in the volume of immigration from a country to the United States. Immigrant volume initially increased as a consequence of more rapid population growth and industrialization in a country and the existence of a large gap in real wages between the country and the United States. Within a number of years, volume increased further due to the previous immigration that had occurred. Volume remained high until various changes in Europe caused immigration to decline. Population growth slowed. Most of the countries had undergone industrialization. Partly due to the previous immigration, real wages rose at home and became closer to those in the United States. Thus, each source country went through stages where immigration increased, reached a peak, and then declined.

Differences in the timing of these effects then led to changes in the source countries of the immigrants. The countries of northwest Europe were the first to experience rapid population growth and to begin industrializing. By the latter part of the nineteenth century, immigration from these countries was in the stage of decline. At about the same time, countries in central, eastern, and southern Europe were experiencing the beginnings of industrialization and more rapid population growth. This model holds directly only through the 1920s, because U.S. government policy changed. At that point, quotas were established on the number of individuals allowed to immigrate from each country. Even so, many countries, especially those in northwest Europe, had passed the point where a large number of individuals wanted to leave and thus did not fill their quotas. The quotas were binding for many other countries in Europe in which pressures to immigrate were still strong. Even today, the countries providing the majority of immigrants to the United States, those south of the United States and in Asia and Africa, are places where population growth is high, industrialization is breaking traditional ties with the land, and real wage differentials with the United States are large.

Immigration Policy and Nativism

This section summarizes the changes in U.S. immigration policy. Only the most important policy changes are discussed and a number of relatively minor changes have been ignored. Interested readers are referred to Le May (1987) and Briggs (1984) for more complete accounts of U.S. immigration policy.

Few Restrictions before 1882

Immigration into the United States was subject to virtually no legal restrictions before 1882. Essentially, anyone who wanted to enter the United States could and, as discussed earlier, no specified arrival areas existed until 1855. Individuals simply got off the ship and went about their business. Little opposition among U.S. citizens to immigration is apparent until about the 1830s. The growing concern at this time was due to the increasing volume of immigration, both in absolute terms and relative to the U.S. population, and to the fact that more of the arrivals were Catholic and unskilled. Nativist feeling burst into the open during the 1850s, when the Know-Nothing political party achieved a great deal of political success in the 1854 off-year elections. The party did not favor restrictions on the number of immigrants, though it did seek to restrict immigrants’ ability to quickly become voting citizens. For a short period of time, the Know-Nothings had an important presence in Congress and many state legislatures. With the downturn in immigration in 1855 and the nation’s attention turning more to the slavery issue, their influence receded.

Chinese Exclusion Act

The first restrictive immigration laws were directed against Asian countries. The first was the Chinese Exclusion Act of 1882. This law essentially prohibited the immigration of Chinese citizens, and it stayed in effect until it was repealed during World War II. In 1907, Japanese immigration was substantially reduced through a Gentlemen’s Agreement between Japan and the United States. It is noteworthy that the Chinese Exclusion Act also prohibited the immigration of “convicts, lunatics, idiots” and individuals who might need to be supported by government assistance. The latter provision was used to some extent during periods of high unemployment, though as noted above, immigration fell anyway because of the lack of jobs.

Literacy Test Adopted in 1917

The desire to restrict immigration to the United States grew over the latter part of the nineteenth century. This growth was due partly to the high volume and rate of immigration and partly to the changing national origins of the immigrants; more began arriving from southern, central, and eastern Europe. In 1907, Congress set up the Immigration Commission, chaired by Senator William Dillingham, to investigate immigration. This body issued a famous report, now viewed as flawed, concluding that immigrants from the newer parts of Europe did not assimilate easily and, in general, blaming them for various economic ills. Restrictionists initially proposed a law requiring a literacy test for admission to the United States, and such a law was finally passed in 1917. This same law also virtually banned immigration from any country in Asia. Restrictionists were no doubt displeased when the volume of immigration from Europe resumed its former level after World War I in spite of the literacy test. The movement then turned to explicitly limiting the number of arrivals.

1920s: Quota Act and National Origins Act

The Quota Act of 1921 laid the framework for a fundamental change in U.S. immigration policy. It limited the number of immigrants from Europe to a total of about 350,000 per year. National quotas were established in direct proportion to each country’s presence in the U.S. population in 1910. In addition, the act assigned Asian countries quotas near zero. Three years later in 1924, the National Origins Act instituted a requirement that visas be obtained from an American consulate abroad before immigrating, reduced the total European quota to about 165,000, and changed how the quotas were determined. Now, the quotas were established in direct proportion to each country’s presence in the U.S. population in 1890, though this aspect of the act was not fully implemented until 1929. Because relatively few individuals immigrated from southern, central, and eastern Europe before 1890, the effect of the 1924 law was to drastically reduce the number of individuals allowed to immigrate to the United States from these countries. Yet total immigration to the United States remained fairly high until the Great Depression because neither the 1921 nor the 1924 law restricted immigration from the Western Hemisphere. Thus, it was the combination of the outbreak of World War I and the subsequent 1920s restrictions that caused the Western Hemisphere to become a more important source of immigrants to the United States after 1915, though it should be recalled the rate of immigration fell to low levels after 1930.

Immigration and Nationality Act of 1965

The last major change in U.S. immigration policy occurred with the passage of the Immigration and Nationality Act of 1965. This law abolished the quotas based on national origins. Instead, a series of preferences was established to determine who would gain entry. The most important preference was given to relatives of U.S. citizens and permanent resident aliens. By the twenty-first century, about two-thirds of immigrants came through these family channels. Preferences were also given to professionals, scientists, artists, and workers in short supply. The 1965 law kept an overall quota on total immigration for Eastern Hemisphere countries, originally set at 170,000, and no more than 20,000 individuals were allowed to immigrate to the United States from any single country. This law was designed to treat all countries equally. Asian countries were treated the same as any other country, so the virtual prohibition on immigration from Asia disappeared. In addition, for the first time the law also limited the number of immigrants from Western Hemisphere countries, with the original overall quota set at 120,000. It is important to note that neither quota was binding because immediate relatives of U.S. citizens, such as spouses, parents, and minor children, were exempt from the quotas. In addition, the United States has admitted large numbers of refugees at different times from Vietnam, Haiti, Cuba, and other countries. Finally, many individuals enter the United States on student visas, enroll in colleges and universities, and eventually get companies to sponsor them for a work visa. Thus, the total number of legal immigrants to the United States since 1965 has always been larger than the combined quotas. This law has led to an increase in the volume of immigration and, by treating all countries the same, has led to Asia recently becoming a more important source of U.S. immigrants.

Though features of the 1965 law have been modified since it was enacted, this law still serves as the basis for U.S. immigration policy today. The most important modifications occurred in 1986 when employer sanctions were adopted for those hiring illegal workers. On the other hand, the same law also gave temporary resident status to individuals who had lived illegally in the United States since before 1982. The latter feature led to very high volumes of legal immigration being recorded in 1989, 1990, and 1991.

The Characteristics of the Immigrants

In this section, various characteristics of the immigrant stream arriving at different points in time are discussed. The following characteristics of immigration are analyzed: gender breakdown, age structure, family vs. individual migration, and occupations listed. Virtually all the information is based on the Passenger Lists, a source discussed above.

Gender and Age

Data are presented in Table 2 on the gender breakdown and age structure of immigration. Both remained fairly consistent in the period before 1930. Generally, about 60% of the immigrants were male. As to age structure, about 20% of immigrants were children, 70% were adults up to age 44, and 10% were older than 44. In most of the period and for most countries, immigrants were typically young single males, young couples, or, especially in the era before the steamship, families. For particular countries, such as Ireland, a large number of the immigrants were single women (Cohn, 1995). The primary exception to this generalization was the 1899-1914 period, when 68% of the immigrants were male and adults under 45 accounted for 82% of the total. This period saw the immigration of a large number of single males who planned to work for a period of months or years and then return to their homeland, a development made possible by the steamship shortening the voyage and reducing its cost (Nugent, 1992). The characteristics of the immigrant stream since 1930 have been somewhat different. Males have comprised less than one-half of all immigrants. In addition, the percentage of immigrants over age 45 has increased at the expense of those between the ages of 14 and 44.

Table 2
Immigration by Gender and Age

Years Percent Males Percent under 14 years Percent 14-44 years Percent 45 years and over
1820-1831 70 19 70 11
1832-1846 62 24 67 10
1847-1854 59 23 67 10
1855-1864 58 19 71 10
1865-1873 62 21 66 13
1873-1880 63 19 69 12
1881-1893 61 20 71 10
1894-1898 57 15 77 8
1899-1914 68 12 82 5
1915-1917 59 16 74 10
1918-1930 56 18 73 9
1931-1946 40 15 67 17
1947-1960 45 21 64 15
1961-1970 45 25 61 14
1971-1980 46 24 61 15
1981-1990 52 18 66 16
1991-2000 51 17 65 18
2001-2008 45 15 64 21
2009-2015 45 15 61 24

Notes: From 1918-1970, the age breakdown is “Under 16” and “16-44.” From 1971 to 1998, the age breakdown is “Under 15” and “15-44.” For 2001-2015, it is again “Under 16” and “16-44.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years).

Occupations

Table 3 presents data on the percentage of immigrants who did not report an occupation and the percentage breakdown of those reporting an occupation. The percentage not reporting an occupation declined through 1914. The small percentages between 1894 and 1914 reflect the large number of single males who arrived during this period. As is apparent, the classification scheme for occupations has changed over time. Though there is no perfect way to match the occupation categories used in the different time periods, skilled workers comprised about one-fourth of the immigrant stream through 1970. The immigration of farmers was important before the Civil War but declined steadily over time. The percentage of laborers has varied over time, though during some periods they comprised one-half or more of the immigrants. The highest percentages of laborers occurred during good years for the U.S. economy (1847-54, 1865-73, 1881-93, 1899-1914), because laborers possessed the fewest skills and would have an easier time finding a job when the U.S. economy was strong. Commercial workers, mainly merchants, were an important group of immigrants very early, when immigrant volume was low, but their percentage fell substantially over time. Professional workers were always a small part of U.S. immigration until the 1930s. Since 1930, these workers have comprised a larger percentage of immigrants reporting an occupation.

Table 3
Immigration by Occupation

Year Percent with no occup. listed Percent of immigrants with an occupation in each category
Professional Commercial Skilled Farmers Servants Laborers Misc.
1820-1831 61 3 28 30 23 2 14
1832-1846 56 1 12 27 33 2 24
1847-1854 54 0 6 18 33 2 41
1855-1864 53 1 12 23 23 4 37 0
1865-1873 54 1 6 24 18 7 44 1
1873-1880 47 2 4 24 18 8 40 5
1881-1893 49 1 3 20 14 9 51 3
1894-1898 38 1 4 25 12 18 37 3
Professional, technical, and kindred workers Farmers and farm managers Managers, officials, and proprietors, exc. farm Clerical, sales, and kindred workers Craftsmen, foremen, operatives, and kindred workers Private HH workers Service workers, exc. private household Farm laborers and foremen Laborers, exc. farm and mine
1899-1914 26 1 2 3 2 18 15 2 26 33
1915-1919 37 5 4 5 5 21 15 7 11 26
1920-1930 39 4 5 4 7 24 17 6 8 25
1931-1946 59 19 4 15 13 21 13 6 2 7
1947-1960 53 16 5 5 17 31 8 6 3 10
1961-1970 56 23 2 5 17 25 9 7 4 9
1971-1980 59 25 — a 8 12 36 — b 15 5 — c
1981-1990 56 14 — a 8 12 37 — b 22 7 — c
1991-2000 61 17 — a 7 9 23 — b 14 30 — c
2001-2008 76 45 — a — d 14 21 — b 18 5 — c
2009-2015 76 46 — a — d 12 19 — b 19 5 — c

a – included with “Farm laborers and foremen”; b – included with “Service workers, etc.”; c – included with “Craftsmen, etc.”; d – included with “Professional.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years). From 1970 through 2001, the INS provided the following occupational categories: Professional, specialty, and technical (listed above under “Professional”); Executive, administrative, and managerial (listed above under “Managers, etc.”); Sales; Administrative support (these two are combined and listed above under “Clerical, etc.”); Precision production, craft, and repair; Operator, fabricator, and laborer (these two are combined and listed above under “Craftsmen, etc.”); Farming, forestry, and fishing (listed above under “Farm laborers and foremen”); and Service (listed above under “Service workers, etc.”). Since 2002, the Department of Homeland Security has combined the Professional and Executive categories. Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Skill Levels

The skill level of the immigrant stream is important because it potentially affects the U.S. labor force, an issue considered in the next section. Before turning to this issue, a number of comments can be made concerning the occupational skill level of the U.S. immigration stream. First, skill levels fell substantially in the period before the Civil War. Between 1820 and 1831, only 39% of the immigrants were farmers, servants, or laborers, the least skilled groups. Though the data are not as complete, immigration during the colonial period was almost certainly at least this skilled. By the 1847-54 period, however, the less-skilled percentage had increased to 76%. Second, the less-skilled percentage did not change dramatically late in the nineteenth century when the source of immigration changed from northwest Europe to other parts of Europe. Comparing 1873-80 with 1899-1914, both periods of high immigration, farmers, servants, and laborers accounted for 66% of the immigrants in the former period and 78% in the latter period. The second figure is, however, similar to that during the 1847-54 period. Third, the restrictions on immigration imposed during the 1920s had a sizable effect on the skill level of the immigrant stream. Between 1930 and 1970, only 31-34% of the immigrants were in the least-skilled group.
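The least-skilled shares cited in this paragraph are simple sums of the farmer, servant, and laborer percentages in Table 3. As an illustration (not part of the original analysis), the arithmetic can be sketched in Python using a few of the table’s rows:

```python
# Illustrative sketch: the "least-skilled" share of immigrants reporting an
# occupation, computed from selected rows of Table 3. Farmers, servants, and
# laborers are treated as the least-skilled groups, following the text.

TABLE_3 = {
    # period: (professional, commercial, skilled, farmers, servants, laborers)
    "1820-1831": (3, 28, 30, 23, 2, 14),
    "1847-1854": (0, 6, 18, 33, 2, 41),
    "1873-1880": (2, 4, 24, 18, 8, 40),
}

def least_skilled_share(row):
    """Sum the farmer, servant, and laborer percentages."""
    _prof, _comm, _skilled, farmers, servants, laborers = row
    return farmers + servants + laborers

for period, row in TABLE_3.items():
    print(f"{period}: {least_skilled_share(row)}% least skilled")
# Prints 39% for 1820-1831, 76% for 1847-1854, and 66% for 1873-1880,
# matching the figures cited in the text.
```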

Fourth, the numbers suggest a deterioration in immigrant skills in the 1980s and 1990s, followed by an improvement since 2001. Both changes may be an illusion. In Table 3 for the 1980s and 1990s, the percentage in the “Professional” category falls while the percentages in the “Service” and “Farm workers” categories rise. These changes are, however, due to the amnesty for illegal immigrants resulting from the 1986 law. The amnesty led to the recorded volume of immigration in 1989, 1990, and 1991 being much higher than typical, and most of the “extra” immigrants recorded their occupation as “Service” or “Farm laborer.” If these years are ignored, then little change occurred in the occupational distribution of the immigrant stream during the 1980s and 1990s. Two caveats, however, should be noted. First, the illegal immigrants cannot, of course, be ignored. Second, the skill level of the U.S. labor force was improving over the same period. Thus, relative to the U.S. labor force and including illegal immigration, the occupational skill level of the U.S. immigrant stream apparently declined during the 1980s and 1990s. Turning to the twenty-first century, the percentage of the legal immigrant stream in the highest-skilled category appears to have increased. This conclusion is also not certain, because changes in how occupations were categorized beginning in 2001 make a straightforward comparison potentially inexact. The uncertainty is increased by the growing percentage of immigrants for whom no occupation is reported. It is not clear whether a larger percentage of those arriving actually did not work (recall that a growing percentage of legal immigrants are somewhat older) or whether more simply did not list an occupation. Overall, detecting changes in the skill level of the legal immigrant stream since about 1930 is fraught with difficulty.

The Effects of Immigration on the United States Economy

Though immigration also affects the country the immigrants leave, this section examines only the effects on the United States, mainly those occurring over longer periods of time. Over short periods, sizable and potentially negative effects can occur in a specific area when there is a huge influx of immigrants. A large number of arrivals in a short period of time in one city can cause school systems to become overcrowded, housing prices and welfare payments to increase, and jobs to become difficult to obtain. Yet most economists believe the effects of immigration over time are much less harmful than commonly supposed and, in many ways, are beneficial. The following longer-term issues are discussed: the effects of immigration on the overall wage rate of U.S. workers; the effects on the wages of particular groups of workers, such as those who are unskilled; and the effects on the rate of economic growth, that is, the standard of living, in the United States. Determining the effects of immigration on the United States is complex, and virtually none of the conclusions presented here are without controversy.

Immigration’s Impact on Overall Wage Rates

Immigration is popularly thought to lower the overall wage rate in the United States by increasing the supply of individuals looking for jobs. This effect may occur in an area over a fairly short period of time. Over longer periods, however, wages will fall only if the amounts of other resources do not change. Wages will not fall if the immigrants bring sufficient amounts of other resources with them, such as capital, or cause the amount of other resources in the economy to increase sufficiently. For example, the large-scale immigration from Europe contributed to the rapid westward expansion of the United States during most of the nineteenth century. The westward expansion, however, increased the amounts of land and natural resources that were available, factors that almost certainly kept immigration from lowering wage rates. Immigrants also increase the amounts of other resources in the economy by running their own businesses, which both historically and in recent times has occurred at a greater rate among immigrants than among native workers. By the beginning of the twentieth century, the westward frontier had been settled. A number of researchers have estimated that immigration did lower wages at this time (Hatton and Williamson, 1998; Goldin, 1994), though others have criticized these findings (Carter and Sutch, 1999). For the recent period, most studies have found little effect of immigration on the level of wages, though a few have found an effect (Borjas, 1999).

Even if immigration leads to a fall in the wage rate, it does not follow that individual workers are worse off. Workers typically receive income from sources other than their own labor. If wages fall, then many other resource prices in the economy rise. For example, immigration increases the demand for housing and land and existing owners benefit from an increase in the current value of their property. Whether any individual worker is better off or worse off in this case is not easy to determine. It depends on the amounts of other resources each individual possesses.

Immigration’s Impact on Wages of Unskilled Workers

Consider the second issue, the effects of immigration on the wages of unskilled workers. If the immigrants arriving in the country are primarily unskilled, then the larger number of unskilled workers could cause their wage to fall if the overall demand for these workers doesn’t change. A requirement for this effect to occur is that the immigrants be less skilled than the U.S. labor force they enter. As discussed above, during colonial times immigrant volume was small and the immigrants were probably more skilled than the existing U.S. labor force. During the 1830s and 1840s, the volume and rate of immigration increased substantially and the skill level of the immigrant stream fell to approximately match that of the native labor force. Instead of lowering the wages of unskilled workers relative to those of skilled workers, however, the large inflow apparently led to little change in the wages of unskilled workers, while some skilled workers lost and others gained. The explanation for these results is that the larger number of unskilled workers resulting from immigration was a factor in employers adopting new methods of production that used more unskilled labor. As a result of this technological change, the demand for unskilled workers increased so their wage did not decline. As employers adopted these new machines, however, skilled artisans who had previously done many of these jobs, such as iron casting, suffered losses. Other skilled workers, such as many white-collar workers who were not in direct competition with the immigrants, gained. Some evidence exists to support a differential effect on skilled workers during the antebellum period (Williamson and Lindert, 1980; Margo, 2000). After the Civil War, however, the skill level of the immigrant stream was close to that of the native labor force, so immigration probably did not further affect the wage structure through the 1920s (Carter and Sutch, 1999).

Impact since World War II

The lower volume of immigration in the period from 1930 through 1960 meant immigration had little effect on the relative wages of different workers during these years. With the resumption of higher volumes of immigration after 1965, however, and with the immigrants’ skill levels being low through 2000, an effect on relative wages again became possible. In fact, the relative wages of high-school dropouts in the United States deteriorated during the same period, especially after the mid-1970s. Researchers who have studied the question have concluded that immigration accounted for about one-fourth of the wage deterioration experienced by high-school dropouts during the 1980s, though some researchers find a lower effect and others a higher one (Friedberg and Hunt, 1995; Borjas, 1999). Wages are determined by a number of factors other than immigration. In this case, it is thought the changing nature of the economy, such as the widespread use of computers increasing the benefits to education, bears more of the blame for the decline in the relative wages of high-school dropouts.

Economic Benefits from Immigration

Beyond any effect on wages, there are a number of ways in which immigration might improve the overall standard of living in an economy. First, immigrants may engage in inventive or scientific activity, with the result being a gain to everyone. Evidence exists for both the historical and more recent periods that the United States has attracted individuals with an inventive/scientific nature. The United States has always been a leader in these areas, and individuals are more likely to be successful in such an environment than in one where these activities are less highly valued. Second, immigrants expand the size of markets for various goods, which may lower firms’ average costs as firm size increases. The result would be a decrease in the price of the goods in question. Third, most individuals immigrate between the ages of 15 and 35, so the expenses of their basic schooling are paid abroad. In the past, most immigrants, being of working age, immediately got a job. Thus, immigration increased the percentage of the population in the United States that worked, a factor that raises the average standard of living in a country. Even in more recent times, most immigrants work, though the increased proportion of older individuals in the immigrant stream means the positive effects from this factor may be lower than in the past. Fourth, while immigrants may place a strain on government services in an area, such as the school system, they also pay taxes. Even illegal immigrants directly pay sales taxes on their purchases of goods and indirectly pay property taxes through their rent. Finally, the fact that immigrants are less likely to come to the United States during periods of high unemployment is also beneficial. By reducing the number of people looking for jobs during these periods, this factor increases the likelihood that U.S. citizens will be able to find a job.

The Experience of Immigrants in the U.S. Labor Market

This section examines the labor market experiences of immigrants in the United States. The issue of discrimination against immigrants in jobs is investigated, along with the degree of success immigrants experienced over time. Again, the issues are investigated for the historical period of immigration as well as more recent times. Interested readers are directed to Borjas (1999), Ferrie (1999), Carter and Sutch (1999), Hatton and Williamson (1998), and Friedberg and Hunt (1995) for more technical discussions.

Did Immigrants Face Labor Market Discrimination?

Discrimination can take various forms. The first form is wage discrimination, in which a worker of one group is paid a wage lower than an equally productive worker of another group. Empirical tests of this hypothesis generally find this type of discrimination has not existed: at any point in time, immigrants have been paid the same wage for a specific job as a native worker. If immigrants generally received lower wages than native workers, the differences reflected the lower skills of the immigrants. Historically, as discussed above, the skill level of the immigrant stream was similar to that of the native labor force, so wages did not differ much between the two groups. During more recent years, the immigrant stream has been less skilled than the native labor force, leading to the receipt of lower wages by immigrants. A second form of discrimination is in the jobs an immigrant is able to obtain. For example, in 1910, immigrants accounted for over half of the workers in various occupations, including miners, apparel workers, workers in steel manufacturing, meat packers, bakers, and tailors. If a reason for the employment concentration was that immigrants were kept out of alternative higher-paying jobs, then the immigrants would suffer. This type of discrimination may have occurred against Catholics during the 1840s and 1850s and against the immigrants from central, southern, and eastern Europe after 1890. In both cases, it is possible the immigrants suffered because they could not obtain higher-paying jobs. In more recent years, reports of immigrants trained as doctors in their home country but not allowed to easily practice as such in the United States may represent a similar situation. Yet the open nature of the U.S. schooling system and economy has been such that this effect usually did not impact the fortunes of the immigrants’ children, or did so at a much smaller rate.

Wage Growth, Job Mobility, and Wealth Accumulation

Another aspect of how immigrants fared in the U.S. labor market is their experiences over time with respect to wage growth, job mobility, and wealth accumulation. A study done by Ferrie (1999) for immigrants arriving between 1840 and 1850, the period when the inflow of immigrants relative to the U.S. population was the highest, found immigrants from Britain and Germany generally improved their job status over time. By 1860, over 75% of the individuals reporting a low-skilled job on the Passenger Lists had moved up into a higher-skilled job, while fewer than 25% of those reporting a high-skilled job on the Passenger Lists had moved down into a lower-skilled job. Thus, the job mobility for these individuals was high. For immigrants from Ireland, the experience was quite different; the percentage of immigrants moving up was only 40% and the percentage moving down was over 50%. It isn’t clear if the Irish did worse because they had less education and fewer skills or whether the differences were due to some type of discrimination against them in the labor market. As to wealth, all the immigrant groups succeeded in accumulating larger amounts of wealth the longer they were in the United States, though their wealth levels fell short of those enjoyed by natives. Essentially, the evidence indicates antebellum immigrants were quite successful over time in matching their skills to the available jobs in the U.S. economy.

The extent to which immigrants had success over time in the labor market in the period since the Civil War is not clear. Most researchers have thought that immigrants who arrived before 1915 had a difficult time. For example, Hanes (1996) concludes that immigrants, even those from northwest Europe, had slower earnings growth over time than natives, a finding he argues was due to poor assimilation. Hatton and Williamson (1998), on the other hand, criticize these findings on technical grounds and conclude that immigrants assimilated relatively easily into the U.S. labor market. For the period after World War II, Chiswick (1978) argues that immigrants’ wages have increased relative to those of natives the longer the immigrants have been in the United States. Borjas (1999) has criticized Chiswick’s finding by suggesting it is caused by a decline in the skills possessed by the arriving immigrants between the 1950s and the 1990s. Borjas finds that 25- to 34-year-old male immigrants who arrived in the late 1950s had wages 9% lower than comparable native males, but by 1970 had wages 6% higher. In contrast, those arriving in the late 1970s had wages 22% lower at entry. By the late 1990s, their wages were still 12% lower than comparable natives. Overall, the degree of success experienced by immigrants in the U.S. labor market remains an area of controversy.

References

Borjas, George J. Heaven’s Door: Immigration Policy and the American Economy. Princeton: Princeton University Press, 1999.

Briggs, Vernon M., Jr. Immigration and the American Labor Force. Baltimore: Johns Hopkins University Press, 1984.

Carter, Susan B., and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 319-341. New York: Russell Sage Foundation, 1999.

Carter, Susan B., et al. Historical Statistics of the United States: Earliest Times to the Present – Millennial Edition. Volume 1: Population. New York: Cambridge University Press, 2006.

Chiswick, Barry R. “The Effect of Americanization on the Earnings of Foreign-Born Men.” Journal of Political Economy 86 (1978): 897-921.

Cohn, Raymond L. “A Comparative Analysis of European Immigrant Streams to the United States during the Early Mass Migration.” Social Science History 19 (1995): 63-89.

Cohn, Raymond L. “The Transition from Sail to Steam in Immigration to the United States.” Journal of Economic History 65 (2005): 479-495.

Cohn, Raymond L. Mass Migration under Sail: European Immigration to the Antebellum United States. New York: Cambridge University Press, 2009.

Erickson, Charlotte J. Leaving England: Essays on British Emigration in the Nineteenth Century. Ithaca: Cornell University Press, 1994.

Ferenczi, Imre. International Migrations. New York: Arno Press, 1970.

Ferrie, Joseph P. Yankeys Now: Immigrants in the Antebellum United States, 1840-1860. New York: Oxford University Press, 1999.

Friedberg, Rachel M., and Jennifer Hunt. “The Impact of Immigrants on Host Country Wages, Employment and Growth.” Journal of Economic Perspectives 9 (1995): 23-44.

Goldin, Claudia. “The Political Economy of Immigration Restrictions in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary D. Libecap, 223-257. Chicago: University of Chicago Press, 1994.

Grabbe, Hans-Jürgen. “European Immigration to the United States in the Early National Period, 1783-1820.” Proceedings of the American Philosophical Society 133 (1989): 190-214.

Hanes, Christopher. “Immigrants’ Relative Rate of Wage Growth in the Late Nineteenth Century.” Explorations in Economic History 33 (1996): 35-64.

Hansen, Marcus L. The Atlantic Migration, 1607-1860. Cambridge, MA: Harvard University Press, 1940.

Hatton, Timothy J., and Jeffrey G. Williamson. The Age of Mass Migration: Causes and Economic Impact. New York: Oxford University Press, 1998.

Jones, Maldwyn Allen. American Immigration. Second edition. Chicago: University of Chicago Press, 1960.

Le May, Michael C. From Open Door to Dutch Door: An Analysis of U.S. Immigration Policy Since 1820. New York: Praeger, 1987.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Massey, Douglas S. “Why Does Immigration Occur? A Theoretical Synthesis.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 34-52. New York: Russell Sage Foundation, 1999.

Miller, Kerby A. Emigrants and Exiles: Ireland and the Irish Exodus to North America. Oxford: Oxford University Press, 1985.

Nugent, Walter. Crossings: The Great Transatlantic Migrations, 1870-1914. Bloomington and Indianapolis: Indiana University Press, 1992.

Taylor, Philip. The Distant Magnet. New York: Harper & Row, 1971.

Thomas, Brinley. Migration and Economic Growth: A Study of Great Britain and the Atlantic Economy. Cambridge, U.K.: Cambridge University Press, 1954.

U.S. Department of Commerce. Historical Statistics of the United States. Washington, DC, 1976.

U.S. Immigration and Naturalization Service. Statistical Yearbook of the Immigration and Naturalization Service. Washington, DC: U.S. Government Printing Office, various years.

Walker, Mack. Germany and the Emigration, 1816-1885. Cambridge, MA: Harvard University Press, 1964.

Williamson, Jeffrey G., and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Citation: Cohn, Raymond L. “Immigration to the United States”. EH.Net Encyclopedia, edited by Robert Whaples. Revised August 2, 2017. URL http://eh.net/encyclopedia/immigration-to-the-united-states/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1 —
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 — 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year Census of Manufacturing Jones Manufacturing Owen Nonstudent Males Greis Manufacturing Greis All Workers Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers (35.8 versus 59.6 hours). All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime spent in paid work (due largely to lengthening periods of education and retirement), the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends, Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish (a short check of these magnitudes follows Table 6).

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
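As a quick arithmetic check, Fogel’s headline magnitudes can be reproduced from the values reported in Tables 5 and 6; the short Python sketch below is illustrative only and introduces nothing beyond the tables themselves.

```python
# Checking the magnitudes cited from Fogel (2000); all inputs are copied
# from Tables 5 and 6 above.

# Table 5: hours per day for the average male household head.
work_1880, work_1995 = 8.5, 4.7
leisure_1880, leisure_1995 = 1.8, 5.8
print(f"Work fell by {1 - work_1995 / work_1880:.0%}")         # ~45%: "nearly in half"
print(f"Leisure grew {leisure_1995 / leisure_1880:.1f}-fold")  # ~3.2x: "more than triple"

# Table 6: projected lifetime hours for 2040.
discretionary_2040, work_2040 = 321_900, 75_900
print(f"2040 work share: {work_2040 / discretionary_2040:.1%}")  # ~23.6%: under one-fourth
```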

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than in Denmark and less than in the USSR.
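The percent declines attributed to Greis follow directly from these hour totals. A minimal sketch of the calculation, using only the figures quoted above:

```python
# Percent-decline arithmetic behind the Greis (1984) comparison.
def pct_decline(start_hours, end_hours):
    """Percentage fall from start_hours to end_hours."""
    return (start_hours - end_hours) / start_hours * 100

print(f"U.S., 1950-1979: {pct_decline(1908, 1704):.1f}%")            # 10.7%
print(f"Western Europe, 1950-1979: {pct_decline(2170, 1698):.1f}%")  # 21.8%
```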

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good, and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement convened the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours, and by the late 1860s efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers that probably had much to do with the failure of the 1886 drive for the eight-hour day: some insisted on eight hours with ten hours’ pay, while others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to uphold state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, providing that every contract to which the U.S. government was a party must contain an eight-hour-day clause. Three years later, LaFollette’s Seamen’s Act established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period, the Adamson Act of 1916. Passed to counter a threatened nationwide railroad strike, it set eight hours as the basic workday for rail workers and required higher overtime pay for longer hours.

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered whether organized labor would maintain its newfound power; the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later U.S. Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding, but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, compared with only 32 in 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford’s employees accounted for more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that the productivity gains from reducing hours ceased once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy, and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours Reductions during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery Thirty-Hour Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by business arguments that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933 the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter-Hours Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. With the end of the war in 1946, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to eight-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an eight-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If they set the workweek too long, workers may quit, and few will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.
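The pull of fixed employment costs toward longer hours can be made concrete with a small worked example (the dollar figures are purely illustrative, not drawn from historical data). If hiring a worker entails a fixed weekly cost F (say, an insurance premium) on top of an hourly wage w, the employer’s cost per hour of labor falls as weekly hours H rise:

$$\text{cost per labor-hour} = w + \frac{F}{H}$$

With, say, w = $20 and F = $100, a 25-hour week costs the employer $24 per labor-hour, while a 50-hour week costs only $22. This is why the growth of quasi-fixed costs such as health insurance pushes employers toward longer, rather than shorter, schedules.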

Economic Growth and the Long-Term Reduction of Work Hours

Historically, employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
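A stylized example (with invented numbers) shows how rising wages let workers “buy” leisure while still enjoying higher incomes. Suppose the hourly wage doubles from $0.20 to $0.40, and a worker responds by trimming the workweek from 60 to 55 hours:

$$\$0.20 \times 60 = \$12.00 \quad \text{versus} \quad \$0.40 \times 55 = \$22.00$$

Weekly income rises by 83 percent even though hours fall by about 8 percent. Taking most of the productivity gain as income and only a modest share as leisure is exactly the pattern behind the slow, long-term decline of the workweek.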

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.
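A rough tally of the shares reported above shows how fully the decline is accounted for:

$$\tfrac{1}{2}\ \text{(real wages)} + \tfrac{1}{5}\ \text{(immigration)} + \tfrac{1}{7}\ \text{(unionization)} \approx 0.84$$

leaving roughly one-sixth of the reduction to electrification, legislation, and the other factors in the analysis.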

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
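To put these elasticities in concrete terms, consider a hypothetical city with wages 10 percent above average and a baseline workweek of 55 hours (both numbers invented for illustration):

$$\Delta H \approx \varepsilon \times 0.10 \times 55, \qquad \varepsilon \in [-0.13, -0.05]$$

which works out to a workweek only about 0.3 to 0.7 hours shorter. The effect is real but small, consistent with the finding that workers devoted only part of their higher wages to “buying” shorter hours.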

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in Southeast China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy, since its origins were as a commercial entrepot for China’s regional and global trade, and this is still a role it plays today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. Hong Kong also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to the British under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community numbered over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually there was increasing diversification into services and retail outlets to meet the needs of the local population, as well as shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining and cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. The mainland territory of Hong Kong came under British rule through two further agreements in this period: Kowloon was ceded in 1860 and the New Territories were leased for 99 years in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and the Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargoes on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong was vital to the international economic links that the PRC continued in order to pursue industrialization and support grain imports. Even during the period of self-sufficiency in the 1960s, Hong Kong’s imports of food and water from the PRC were a vital source of foreign exchange revenue for the mainland that ensured Hong Kong’s usefulness. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong, thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry emerged in the nineteenth century and it began to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s to clothing, electronics, plastics and other labor-intensive production mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SMEs) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still amounted to 98 percent of enterprises, providing 60 percent of total private employment.

Second, until the late 1960s, the government did not engage in active industrial planning. This was partly because the government was preoccupied with social spending on housing large flows of immigrants, and partly because of an ideological sympathy for free market forces. This means that Hong Kong fits outside the usual models of Asian economic development based on state-led industrialization (Japan, South Korea, Singapore, Taiwan), domination by foreign firms (Singapore), or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints in a series of treaties with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual changes in the growth of real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year. From 1976 to 1996 GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68 as a result of local disturbances from the onset of the Cultural Revolution in the PRC, and again in 1973 to 1975 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1: Annual percentage change of per capita GDP, 1962-2001
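These growth rates compound quickly. As a rough check on their magnitude (a simple calculation, not a figure from the article), the doubling times of per capita GDP implied by the two averages are

$$\frac{\ln 2}{\ln 1.065} \approx 11 \text{ years} \qquad \text{and} \qquad \frac{\ln 2}{\ln 1.056} \approx 12.7 \text{ years}$$

so living standards roughly doubled every decade or so in both the 1962-1973 and 1976-1996 periods.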

Reintegration with China, 1978-1997

The Open Door Policy of the PRC announced by Deng Xiao-ping at the end of 1978 marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of the total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there has been increased investment in financial services, tourism and retail trade.
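The cumulative effect of such trade growth is striking. Compounding the reported 28% average over the nineteen years from 1978 to 1997 (a back-of-the-envelope calculation, not a figure from the article) gives

$$1.28^{19} \approx 109$$

that is, roughly a hundredfold expansion of visible trade between Hong Kong and the PRC over the period.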

While manufacturing was moved out of the colony during the 1980s and 1990s, there was a surge in the service sector. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably, it was accomplished without faltering growth rates overall, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force from 1981 to 2000 while manufacturing employment fell from 39% to 10% in the same period.

Figure 2: GDP by economic activity at current prices
Figure 3: Contribution to Hong Kong’s GDP at factor prices

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country, two systems” policy was formulated which left Hong Kong monetarily and economically separate from the mainland, with exchange and trade controls remaining in place as well as restrictions on the movement of people. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, the Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the threat of Severe Acute Respiratory Syndrome (SARS) in 2003, the Asian Financial Crisis pushed Hong Kong into a new era of recession with a rise in unemployment (6% on average from 1998-2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly from the regional crisis, the economic prospects for Hong Kong have been tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: IB Tauris, 2004. For accounts of Hong Kong’s economic history see, D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority www.info.gov.hk/hkma.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-hong-kong/

Smoot-Hawley Tariff

Anthony O’Brien, Lehigh University

The Smoot-Hawley Tariff of 1930 was the subject of enormous controversy at the time of its passage and remains one of the most notorious pieces of legislation in the history of the United States. In the popular press and in political discussions the usual assumption is that the Smoot-Hawley Tariff was a policy disaster that significantly worsened the Great Depression. During the controversy over passage of the North American Free Trade Agreement (NAFTA) in the 1990s, Vice President Al Gore and billionaire former presidential candidate Ross Perot met in a debate on the Larry King Live program. To help make his point that Perot’s opposition to NAFTA was wrong-headed, Gore gave Perot a framed portrait of Sen. Smoot and Rep. Hawley. Gore assumed the audience would consider Smoot and Hawley to have been exemplars of a foolish protectionism. Although the popular consensus on Smoot-Hawley is clear, the verdict among scholars is more mixed, particularly with respect to the question of whether the tariff significantly worsened the Great Depression.

Background to Passage of the Tariff

The Smoot-Hawley Tariff grew out of the campaign promises of Herbert Hoover during the 1928 presidential election. Hoover, the Republican candidate, had pledged to help farmers by raising tariffs on imports of farm products. Although the 1920s were generally a period of prosperity in the United States, this was not true of agriculture; average farm incomes actually declined between 1920 and 1929. During the campaign Hoover had focused on plans to raise tariffs on farm products, but the tariff plank in the 1928 Republican Party platform had actually referred to the potential of more far-reaching increases:

[W]e realize that there are certain industries which cannot now successfully compete with foreign producers because of lower foreign wages and a lower cost of living abroad, and we pledge the next Republican Congress to an examination and where necessary a revision of these schedules to the end that American labor in the industries may again command the home market, may maintain its standard of living, and may count upon steady employment in its accustomed field.

In a longer perspective, the Republican Party had been in favor of a protective tariff since its founding in the 1850s. The party drew significant support from manufacturing interests in the Midwest and Northeast that believed they benefited from high tariff barriers against foreign imports. Although the free trade arguments dear to most economists were espoused by few American politicians during the 1920s, the Democratic Party was generally critical of high tariffs. In the 1920s the Democratic members of Congress tended to represent southern agricultural interests — which saw high tariffs as curtailing foreign markets for their exports, particularly cotton — or unskilled urban workers — who saw the tariff as driving up the cost of living.

The Republicans did well in the 1928 election, picking up 30 seats in the House — giving them a 267 to 167 majority — and seven seats in the Senate — giving them a 56 to 39 majority. Hoover easily defeated the Democratic presidential candidate, New York Governor Al Smith, capturing 58 percent of the popular vote and 444 of 531 votes in the Electoral College. Hoover took office on March 4, 1929 and immediately called a special session of Congress to convene on April 15 for the purpose of raising duties on agricultural products. Once the session began it became clear, however, that the Republican Congressional leadership had in mind much more sweeping tariff increases.

The House concluded its work relatively quickly and passed a bill on May 28 by a vote of 264 to 147. The bill faced a considerably more difficult time in the Senate. A bloc of Progressive Republicans, representing midwestern and western states, held the balance of power in the Senate. Some of these Senators had supported the third-party candidacy of Wisconsin Senator Robert LaFollette during the 1924 presidential election and they were much less protectionist than the Republican Party as a whole. It proved impossible to put together a majority in the Senate to pass the bill and the special session ended in November 1929 without a bill being passed.

By the time Congress reconvened the following spring the Great Depression was well underway. Economists date the onset of the Great Depression to the cyclical peak of August 1929, although the stock market crash of October 1929 is the more traditional beginning. By the spring of 1930 it was already clear that the downturn would be severe. The impact of the Depression helped to secure the final few votes necessary to put together a slim majority in the Senate in favor of passage of the bill. Final passage in the Senate took place on June 13, 1930 by a vote of 44 to 42. Final passage took place in the House the following day by a vote of 245 to 177. The vote was largely on party lines. Republicans in the House voted 230 to 27 in favor of final passage. Ten of the 27 Republicans voting no were Progressives from Wisconsin and Minnesota. Democrats voted 150 to 15 against final passage. Ten of the 15 Democrats voting for final passage were from Louisiana or Florida and represented citrus or sugar interests that received significant new protection under the bill.

President Hoover had expressed reservations about the wide-ranging nature of the bill and had privately expressed fears that the bill might provoke retaliation from America’s trading partners. He received a petition signed by more than 1,000 economists, urging him to veto the bill. Ultimately, he signed the Smoot-Hawley bill into law on June 17, 1930.

Tariff Levels under Smoot-Hawley

Calculating the extent to which Smoot-Hawley raised tariffs is not straightforward. The usual summary measure of tariff protection is the ratio of total tariff duties collected to the value of imports. This measure is misleading when applied to the early 1930s. Most of the tariffs in the Smoot-Hawley bill were specific — such as $1.125 per ton of pig iron — rather than ad valorem — or a percentage of the value of the product. During the early 1930s the prices of many products declined, causing the specific tariff to become an increasing percentage of the value of the product. The chart below shows the ratio of import duties collected to the value of dutiable imports. The increase shown for the early 1930s was partly due to declining prices and, therefore, exaggerates the effects of the Smoot-Hawley rate increases.

Chart: Ratio of import duties collected to the value of dutiable imports.
Source: U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, Washington, D.C.: USGPO, 1975, Series 212.
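A hypothetical calculation illustrates why deflation inflates this measure. The duty of $1.125 per ton of pig iron is from the act itself, but the prices below are invented for illustration. The ad valorem equivalent of a specific duty is

$$\text{ad valorem equivalent} = \frac{\text{specific duty}}{\text{import price}}$$

At a price of $18 per ton the duty is equivalent to 6.25 percent; if the price falls to $12 per ton, the same duty becomes about 9.4 percent. The measured tariff rate rises by half without any change in the statute.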

A more accurate measure of the increase in tariff rates attributable to Smoot-Hawley can be found in a study carried out by the U.S. Tariff Commission. This study calculated the ad valorem rates that would have prevailed on actual U.S. imports in 1928, if the Smoot-Hawley rates had been in effect then. These rates were compared with the rates prevailing under the Tariff Act of 1922, known as the Fordney-McCumber Tariff. The results are reproduced in Table 1 for the broad product categories used in tariff schedules and for total dutiable imports.

Table 1
Tariffs Rates under Fordney-McCumber vs. Smoot-Hawley

Equivalent ad valorem rates
Product Fordney-McCumber Smoot-Hawley
Chemicals 29.72% 36.09%
Earthenware and Glass 48.71 53.73
Metals 33.95 35.08
Wood 24.78 11.73
Sugar 67.85 77.21
Tobacco 63.09 64.78
Agricultural Products 22.71 35.07
Spirits and Wines 38.83 47.44
Cotton Manufactures 40.27 46.42
Flax, Hemp, and Jute 18.16 19.14
Wool and Manufactures 49.54 59.83
Silk Manufactures 56.56 59.13
Rayon Manufactures 52.33 53.62
Paper and Books 24.74 26.06
Sundries 36.97 28.45
Total 38.48 41.14

Source: U.S. Tariff Commission, The Tariff Review, July 1930, Table II, p. 196.

By this measure, Smoot-Hawley raised average tariff rates by about 2 ½ percentage points from the already high rates prevailing under the Fordney-McCumber Tariff of 1922.

The Basic Macroeconomics of the Tariff

Economists are almost uniformly critical of tariffs. One of the bedrock principles of economics is that voluntary trade makes everyone involved better off. For the U.S. government to interfere with trade between Canadian lumber producers and U.S. lumber importers — as it did under Smoot-Hawley by raising the tariff on lumber imports — makes both parties to the trade worse off. In a larger sense, it also hurts the efficiency of the U.S. economy by making it rely on higher priced U.S. lumber rather than less expensive Canadian lumber.

But what is the effect of a tariff on the overall level of employment and production in an economy? The usual answer is that a tariff will leave the overall level of employment and production in an economy largely unaffected. Although the popular view is very different, most economists do not believe that tariffs either create jobs or destroy jobs in aggregate. Economists believe that the overall level of jobs and production in the economy is determined by such things as the capital stock, the population, the state of technology, and so on. These factors are not generally affected by tariffs. So, for instance, a tariff on imports of lumber might drive up housing prices and cause a reduction in the number of houses built. But economists believe that the unemployment in the housing industry will not be long-lived. Economists are somewhat divided on why this is true. Some believe that the economy automatically adjusts rapidly to reallocate labor and machinery that are displaced from one use — such as making houses — into other uses. Other economists believe that this adjustment does not take place automatically, but can be brought about through active monetary or fiscal policy. In either view, the economy is seen as ordinarily being at its so-called full-employment or potential level and deviating from that level only for brief periods of time. Tariffs have the ability to change the mix of production and the mix of jobs available in an economy, but not to change the overall level of production or the overall level of jobs. The macroeconomic impact of tariffs is therefore very limited.

In the case of the Smoot-Hawley Tariff, however, the U.S. economy was in depression in 1930. No active monetary or fiscal policies were carried out and the economy was not making much progress back to full employment. In fact, the cyclical trough was not reached until March 1933 and the economy did not return to full employment until 1941. Under these circumstances, is it possible for Smoot-Hawley to have had a significant impact on the level of employment and production, and would that impact have been positive or negative?

A simple view of the determination of equilibrium Gross Domestic Product (Y) holds that it is equal to the sum of aggregate expenditures. Aggregate expenditures are divided into four categories: spending by households on consumption goods (C), spending by households and firms on investment goods — such as houses, and machinery and equipment (I), spending by the government on goods and services (G), and net exports, which are the difference between spending on exports by foreign households and firms (EX) and spending on imports by domestic households and firms (IM). So, in the basic algebra of the principles of economics course, at equilibrium, Y = C + I + G + (EX – IM).

The usual story of the Great Depression is that some combination of falling consumption spending and falling investment spending had resulted in the equilibrium level of GDP being far below its full employment level. By raising tariffs on imports, Smoot-Hawley would have reduced the level of imports, but would not have had any direct effect on exports. This simple analysis seems to lead to a surprising conclusion: by reducing imports, Smoot-Hawley would have raised the level of aggregate expenditures in the economy (by increasing net exports or (EX – IM)) and, therefore, increased the level of GDP relative to what it would otherwise have been.
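A numerical sketch (with invented magnitudes) makes the point. Suppose aggregate expenditures are

$$Y = C + I + G + (EX - IM) = 70 + 10 + 15 + (5 - 6) = 94$$

If the tariff cuts imports from 6 to 5 while exports are untouched, net exports rise from -1 to 0 and equilibrium expenditure rises to 95 (more, once multiplier effects are considered). The conclusion reverses, however, if retaliation cuts exports by as much as or more than the fall in imports, which is precisely the issue taken up next.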

A potential flaw in this argument is that it assumes Smoot-Hawley had no negative impact on U.S. exports. In fact, exports may well have suffered if foreign governments retaliated against the passage of Smoot-Hawley by raising tariffs on imports of U.S. goods. If net exports fell as a result of Smoot-Hawley, then the tariff would have had a negative macroeconomic impact: it would have made the Depression worse. In 1934 Joseph Jones wrote a very influential book in which he argued that widespread retaliation against Smoot-Hawley had, in fact, taken place. Jones’s book helped to establish the view, among the public and among scholars, that the passage of Smoot-Hawley had been a policy blunder that worsened the Great Depression.

Did Retaliation Take Place?

The analysis above is simplified, and there are other channels through which Smoot-Hawley could have had a macroeconomic impact, such as by raising the U.S. price level relative to foreign price levels. But in recent years there has been significant scholarly interest in whether Smoot-Hawley did provoke significant retaliation and therefore made the Depression worse. It is certainly possible to overstate the extent of retaliation, and Jones almost certainly did. For instance, Britain’s important decision in 1931 to abandon a century-long commitment to free trade and raise tariffs was not affected to any significant extent by Smoot-Hawley.

On the other hand, the case for retaliation by Canada is fairly clear. Then, as now, Canada was easily the largest trading partner of the United States. In 1929, 18 percent of U.S. merchandise exports went to Canada and 11 percent of U.S. merchandise imports came from Canada. At the time of the passage of Smoot-Hawley, the Canadian Prime Minister was William Lyon Mackenzie King of the Liberal Party. King had been in office for most of the period since 1921 and had several times reduced Canadian tariffs. He held the position that tariffs should be used to raise revenue but not for protection. In early 1929 he was contemplating pushing for further tariff reductions, but this option was foreclosed by Hoover’s call for a special session of Congress to consider tariff increases.

As Smoot-Hawley neared passage, King came under intense pressure from the Canadian Conservative Party and its leader, Richard Bedford Bennett, to retaliate. In May 1930 Canada imposed so-called countervailing duties on 16 products imported from the United States. The duties on these products, which represented about 30 percent of the value of all U.S. merchandise exports to Canada, were raised to the levels charged by the United States. In a speech, King made clear the retaliatory nature of these increases:

[T]he countervailing duties … [are] designed to give a practical illustration to the United States of the desire of Canada to trade at all times on fair and equal terms…. For the present we raise the duties on these selected commodities to the level applied against Canadian exports of the same commodities by other countries, but at the same time we tell our neighbour … we are ready in the future … to consider trade on a reciprocal basis….

In the election campaign the following July, Smoot-Hawley was a key issue. Bennett, the Conservative candidate, was strongly in favor of retaliation. In one campaign speech he declared:

How many thousands of American workmen are living on Canadian money today? They’ve got the jobs and we’ve got the soup kitchens…. I will not beg of any country to buy our goods. I will make [tariffs] fight for you. I will use them to blast a way into markets that have been closed.

Bennett handily won the election and pushed further tariff increases through the Canadian Parliament.

What Was the Impact of the Tariff on the Great Depression?

If there was retaliation for Smoot-Hawley, was it enough to have made the tariff a significant contributor to the severity of the Great Depression? Most economists are skeptical, because foreign trade made up a small part of the U.S. economy in 1929 and the decline in GDP between 1929 and 1933 was so large. Table 2 gives values for nominal GDP, real GDP (in 1929 dollars), nominal and real net exports, and nominal and real exports. In real terms, net exports did decline by about $0.7 billion between 1929 and 1933, but this amounts to less than one percent of 1929 real GDP and is dwarfed by the total decline in real GDP over the same period (a short check of this arithmetic follows the table).

Table 2
GDP and Exports, 1929-1933 (billions of dollars; real values in 1929 dollars)

Year Nominal GDP Real GDP Nominal Net Exports Real Net Exports Nominal Exports Real Exports
1929 $103.1 $103.1 $0.4 $0.3 $5.9 $5.9
1930 $90.4 $93.3 $0.3 $0.0 $4.4 $4.9
1931 $75.8 $86.1 $0.0 -$0.4 $2.9 $4.1
1932 $58.0 $74.7 $0.0 -$0.3 $2.0 $3.3
1933 $55.6 $73.2 $0.1 -$0.4 $2.0 $3.3

Source: U.S. Department of Commerce, National Income and Product Accounts of the United States, Vol. I, 1929-1958, Washington, D.C.: USGPO, 1993.
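
As a quick check on the net-export figure just cited, the decline can be computed directly from the real net export column of Table 2 (a minimal Python sketch; all values in billions of 1929 dollars):

```python
# Checking the net-export decline against Table 2 (billions of 1929 dollars).
real_net_exports_1929 = 0.3
real_net_exports_1933 = -0.4
real_gdp_1929 = 103.1

decline = real_net_exports_1929 - real_net_exports_1933   # $0.7 billion
share = decline / real_gdp_1929                            # about 0.7 percent
print(f"Decline in real net exports, 1929-33: ${decline:.1f} billion")
print(f"Share of 1929 real GDP: {share:.1%}")              # well under one percent
```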

If we focus on the decline in exports, we can construct an upper bound for the negative impact of Smoot-Hawley. Between 1929 and 1931, real exports declined by an amount equal to about 1.7 percent of 1929 real GDP. Declines in aggregate expenditures are usually thought to have a multiplied effect on equilibrium GDP, and the best estimates are that the multiplier is roughly two. In that case, real GDP would have declined by about 3.4 percent between 1929 and 1931 as a result of the decline in real exports. Real GDP actually declined by about 16.5 percent between 1929 and 1931, so the decline in real exports can account for about 21 percent of the total decline in real GDP. The decline in real exports, then, may well have played an important, but not crucial, role in the decline in GDP during the first two years of the Depression. Bear in mind, though, that not all, and perhaps not even most, of the decline in exports can be attributed to retaliation for Smoot-Hawley. Even if Smoot-Hawley had not been passed, U.S. exports would have fallen as incomes declined in Canada, the United Kingdom, and other U.S. trading partners, and as tariff rates in some of these countries increased for reasons unconnected to Smoot-Hawley.
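
The upper-bound calculation in the paragraph above can be reproduced directly from Table 2. The sketch below simply restates that arithmetic in Python; the multiplier of two is the rough estimate cited in the text, and small differences from the quoted figures reflect rounding.

```python
# Reproducing the upper-bound arithmetic from Table 2 (billions of 1929 dollars).
real_gdp_1929 = 103.1
real_gdp_1931 = 86.1
real_exports_1929 = 5.9
real_exports_1931 = 4.1

export_decline = real_exports_1929 - real_exports_1931        # $1.8 billion
export_share = export_decline / real_gdp_1929                 # about 1.7 percent

multiplier = 2.0   # the rough estimate cited in the text, not a computed value
implied_decline = multiplier * export_share                   # about 3.5 percent;
# the text doubles the rounded 1.7 percent figure, hence its 3.4 percent

actual_decline = (real_gdp_1929 - real_gdp_1931) / real_gdp_1929   # about 16.5 percent
share_explained = implied_decline / actual_decline                 # about 21 percent

print(f"Export decline: {export_share:.1%} of 1929 real GDP")
print(f"Implied GDP decline with a multiplier of two: {implied_decline:.1%}")
print(f"Actual real GDP decline, 1929-31: {actual_decline:.1%}")
print(f"Upper-bound share of the decline explained: {share_explained:.0%}")
```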

Hawley-Smoot or Smoot-Hawley: A Note on Usage

Congressional legislation is often referred to by the names of the member of the House of Representatives and the member of the Senate who introduced the bill. Tariff legislation always originates in the House of Representatives, and by convention the name of its House sponsor, in this case Representative Willis Hawley of Oregon, would precede the name of its Senate sponsor, Senator Reed Smoot of Utah: hence, Hawley-Smoot. In this instance, though, Senator Smoot was far better known than Representative Hawley, and so the legislation is usually referred to as the Smoot-Hawley Tariff. The more formal name of the legislation was the U.S. Tariff Act of 1930.

Further Reading

The Republican Party platform for 1928 is reprinted as: “Republican Platform [of 1928]” in Arthur M. Schlesinger, Jr., Fred L. Israel, and William P. Hansen, editors, History of American Presidential Elections, 1789-1968, New York: Chelsea House, 1971, Vol. 3. Herbert Hoover’s views on the tariff can be found in Herbert Hoover, The Future of Our Foreign Trade, Washington, D.C.: GPO, 1926 and Herbert Hoover, The Memoirs of Herbert Hoover: The Cabinet and the Presidency, 1920-1933, New York: Macmillan, 1952, Chapter 41. Trade statistics for this period can be found in U.S. Department of Commerce, Economic Analysis of Foreign Trade of the United States in Relation to the Tariff. Washington, D.C.: GPO, 1933 and in the annual supplements to the Survey of Current Business.

A classic account of the political process that resulted in the Smoot-Hawley Tariff is given in E. E. Schattschneider, Politics, Pressures and the Tariff, New York: Prentice-Hall, 1935. The best case for the view that there was extensive foreign retaliation against Smoot-Hawley is given in Joseph Jones, Tariff Retaliation: Repercussions of the Hawley-Smoot Bill, Philadelphia: University of Pennsylvania Press, 1934. The Jones book should be used with care; his argument is generally considered to be overstated. The view that party politics was of supreme importance in passage of the tariff is well argued in Robert Pastor, Congress and the Politics of United States Foreign Economic Policy, 1929-1976, Berkeley: University of California Press, 1980.

A discussion of the potential macroeconomic impact of Smoot-Hawley appears in Rudiger Dornbusch and Stanley Fischer, “The Open Economy: Implications for Monetary and Fiscal Policy,” in The American Business Cycle: Continuity and Change, edited by Robert J. Gordon, NBER Studies in Business Cycles, Volume 25, Chicago: University of Chicago Press, 1986, pp. 466-70. See also the article by Barry Eichengreen listed below. An argument that Smoot-Hawley is unlikely to have had a significant macroeconomic effect is given in Peter Temin, Lessons from the Great Depression, Cambridge, MA: MIT Press, 1989, p. 46. For an argument emphasizing the importance of Smoot-Hawley in explaining the Great Depression, see Allan Meltzer, “Monetary and Other Explanations of the Start of the Great Depression,” Journal of Monetary Economics 2 (1976): 455-71.

Recent journal articles that deal with the issues discussed in this entry are:

Callahan, Colleen, Judith A. McDonald and Anthony Patrick O’Brien. “Who Voted for Smoot-Hawley?” Journal of Economic History 54, no. 3 (1994): 683-90.

Crucini, Mario J. and James Kahn. “Tariffs and Aggregate Economic Activity: Lessons from the Great Depression.” Journal of Monetary Economics 38, no. 3 (1996): 427-67.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Irwin, Douglas. “The Smoot-Hawley Tariff: A Quantitative Assessment.” Review of Economics and Statistics 80, no. 2 (1998): 326-34.

Irwin, Douglas, and Randall S. Kroszner. “Log-Rolling and Economic Interests in the Passage of the Smoot-Hawley Tariff.” Carnegie-Rochester Conference Series on Public Policy 45 (1996): 173-200.

McDonald, Judith, Anthony Patrick O’Brien, and Colleen Callahan. “Trade Wars: Canada’s Reaction to the Smoot-Hawley Tariff.” Journal of Economic History 57, no. 4 (1997): 802-26.

Citation: O’Brien, Anthony. “Smoot-Hawley Tariff”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/smoot-hawley-tariff/