
An Economic History of New Zealand in the Nineteenth and Twentieth Centuries

John Singleton, Victoria University of Wellington, New Zealand

Living standards in New Zealand were among the highest in the world between the late nineteenth century and the 1960s. But New Zealand’s economic growth was very sluggish between 1950 and the early 1990s, and most Western European countries, as well as several in East Asia, overtook New Zealand in terms of real per capita income. By the early 2000s, New Zealand’s GDP per capita was in the bottom half of the developed world.

Table 1:
Per capita GDP in New Zealand
compared with the United States and Australia
(in 1990 international dollars)

Year    US      Australia   New Zealand   NZ as % of US   NZ as % of Australia
1840    1588    1374        400           25              29
1900    4091    4013        4298          105             107
1950    9561    7412        8456          88              114
2000    28129   21540       16010         57              74

Source: Angus Maddison, The World Economy: Historical Statistics. Paris: OECD, 2003, pp. 85-7.
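
The final two columns are simple ratios of the preceding figures. As a check, using the year 2000 row:

\[
\frac{16010}{28129} \times 100 \approx 57, \qquad \frac{16010}{21540} \times 100 \approx 74.
\]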

Over the second half of the twentieth century, argue Greasley and Oxley (1999), New Zealand seemed in some respects to have more in common with Latin American countries than with other advanced western nations. In addition to its snail-like growth rate, New Zealand followed highly protectionist economic policies between 1938 and the 1980s. (In absolute terms, however, New Zealanders continued to be much better off than their Latin American counterparts.) Maddison (1991) put New Zealand in a middle-income group of countries, including the former Czechoslovakia, Hungary, Portugal, and Spain.

Origins and Development to 1914

When Europeans (mainly Britons) started to arrive in Aotearoa (New Zealand) in the early nineteenth century, they encountered a tribal society. Maori tribes made a living from agriculture, fishing, and hunting. Internal trade was conducted on the basis of gift exchange. Maori did not hold to the Western concept of exclusive property rights in land. The idea that land could be bought and sold was alien to them. Most early European residents were not permanent settlers. They were short-term male visitors involved in extractive activities such as sealing, whaling, and forestry. They traded with Maori for food, sexual services, and other supplies.

Growing contact between Maori and the British was difficult to manage. In 1840 the British Crown and some Maori signed the Treaty of Waitangi. The treaty, though subject to various interpretations, to some extent regularized the relationship between Maori and Europeans (or Pakeha). At roughly the same time, the first wave of settlers arrived from England to set up colonies including Wellington and Christchurch. Settlers were looking for a better life than they could obtain in overcrowded and class-ridden England. They wished to build a rural and largely self-sufficient society.

For some time, only the Crown was permitted to purchase land from Maori. This land was then either resold or leased to settlers. Many Maori felt – and many still feel – that they were forced to give up land, effectively at gunpoint, in return for a pittance. Perhaps they did not always grasp that land, once sold, was lost forever. Conflict over land led to intermittent warfare between Maori and settlers, especially in the 1860s. There was brutality on both sides, but the Europeans on the whole showed more restraint in New Zealand than in North America, Australia, or Southern Africa.

Maori actually required less land in the nineteenth century because their numbers were falling, possibly by half between the late eighteenth and late nineteenth centuries. By the 1860s, Maori were outnumbered by British settlers. The introduction of European diseases, alcohol, and guns contributed to the decline in population. Increased mobility and contact between tribes may also have spread disease. The Maori population did not begin to recover until the twentieth century.

Gold was discovered in several parts of New Zealand (including Thames and Otago) in the mid-nineteenth century, but the introduction of sheep farming in the 1850s gave a more enduring boost to the economy. Australian and New Zealand wool was in high demand in the textile mills of Yorkshire. Sheep farming necessitated the clearing of native forests and the planting of grasslands, which changed the appearance of large tracts of New Zealand. This work was expensive, and easy access to the London capital market was critical. Economic relations between New Zealand and Britain were strong, and remained so until the 1970s.

Between the mid-1870s and mid-1890s, New Zealand was adversely affected by weak export prices, and in some years there was net emigration. But wool prices recovered in the 1890s, just as new exports – meat and dairy produce – were coming to prominence. Until the advent of refrigeration in the early 1880s, New Zealand did not export meat and dairy produce. After the introduction of refrigeration, however, New Zealand foodstuffs found their way onto the dinner tables of working-class families in Britain, though not those of the middle and upper classes, who could afford fresh produce.

In comparative terms, the New Zealand economy was in its heyday in the two decades before 1914. New Zealand (though not its Maori shadow, Aotearoa) was a wealthy, dynamic, and egalitarian society. The total population in 1914 was slightly above one million. Exports consisted almost entirely of land-intensive pastoral commodities. Manufactures loomed large in New Zealand’s imports. High labor costs, and the absence of scale economies in the tiny domestic market, hindered industrialization, though there was some processing of export commodities and imports.

War, Depression and Recovery, 1914-38

World War One disrupted agricultural production in Europe, and created a robust demand for New Zealand’s primary exports. Encouraged by high export prices, New Zealand farmers borrowed and invested heavily between 1914 and 1920. Land changed hands at very high prices. Unfortunately, the early twenties brought the start of a prolonged slump in international commodity markets. Many farmers struggled to service and repay their debts.

The global economic downturn, beginning in 1929-30, was transmitted to New Zealand by the collapse in commodity prices on the London market. Farmers bore the brunt of the depression. At the trough, in 1931-32, net farm income was negative. Declining commodity prices increased the already onerous burden of servicing and repaying farm mortgages. Meat freezing works, woolen mills, and dairy factories were caught in the spiral of decline. Farmers had less to spend in the towns. Unemployment rose, and some of the urban jobless drifted back to the family farm. The burden of external debt, the bulk of which was in sterling, rose dramatically relative to export receipts. But a protracted balance of payments crisis was avoided, since the demand for imports fell sharply in response to the drop in incomes. The depression was not as serious in New Zealand as in many industrial countries. Prices were more flexible in the primary sector and in small business than in modern, capital-intensive industry. Nevertheless, the experience of depression profoundly affected New Zealanders’ attitudes towards the international economy for decades to come.

At first, there was no reason to expect that the downturn in 1929-30 was the prelude to the worst slump in history. As tax and customs revenue fell, the government trimmed expenditure in an attempt to balance the budget. Only in 1931 was the severity of the crisis realized. Further cuts were made in public spending. The government intervened in the labor market, securing an order for an all-round reduction in wages. It pressured and then forced the banks to reduce interest rates. The government sought to maintain confidence and restore prosperity by helping farms and other businesses to lower costs. But these policies did not lead to recovery.

Several factors contributed to the recovery that commenced in 1933-34. The New Zealand pound was devalued by 14 percent against sterling in January 1933. As most exports were sold for sterling, which was then converted into New Zealand pounds, the income of farmers was boosted at a stroke of the pen. Devaluation increased the money supply. Once economic actors, including the banks, were convinced that the devaluation was permanent, there was an increase in confidence and in lending. Other developments played their part. World commodity prices stabilized, and then began to pick up. Pastoral output and productivity continued to rise. The 1932 Ottawa Agreements on imperial trade strengthened New Zealand’s position in the British market at the expense of non-empire competitors such as Argentina, and prefigured an increase in the New Zealand tariff on non-empire manufactures. As was the case elsewhere, the recovery in New Zealand was not the product of a coherent economic strategy. When beneficial policies were adopted it was as much by accident as by design.
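
To make the arithmetic of the devaluation concrete: the change took the exchange rate from roughly NZ£110 to NZ£125 per £100 sterling (these two rates are the commonly cited figures; the text above gives only the 14 percent change, so treat them as an illustrative reconstruction). Then

\[
\frac{125 - 110}{110} \approx 0.14,
\]

so a wool cheque of £100 sterling now converted into about NZ£125 rather than NZ£110, raising the farmer’s local-currency income by roughly 14 percent before any movement in world prices.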

Once underway, however, New Zealand’s recovery was comparatively rapid and persisted over the second half of the thirties. A Labour government, elected towards the end of 1935, nationalized the central bank (the Reserve Bank of New Zealand). The government instructed the Reserve Bank to create advances in support of its agricultural marketing and state housing schemes. It became easier to obtain borrowed funds.

An Insulated Economy, 1938-1984

A balance of payments crisis in 1938-39 was met by the introduction of administrative restrictions on imports. Labour had not been prepared to deflate or devalue – the former would have increased unemployment, while the latter would have raised working class living costs. Although intended as a temporary expedient, the direct control of imports became a distinctive feature of New Zealand economic policy until the mid-1980s.

The doctrine of “insulationism” was expounded during the 1940s. Full employment was now the main priority. In the light of disappointing interwar experience, there were doubts about the ability of the pastoral sector to provide sufficient work for New Zealand’s growing population. There was a desire to create more industrial jobs, even though there seemed no prospect of achieving scale economies within such a small country. Uncertainty about export receipts, the need to maintain a high level of domestic demand, and the competitive weakness of the manufacturing sector, appeared to justify the retention of quantitative import controls.

After 1945, many Western countries retained controls over current account transactions for several years. When these controls were relaxed and then abolished in the fifties and early sixties, the anomalous nature of New Zealand’s position became more visible. Although successive governments intended to liberalize, in practice they achieved little, except with respect to trade with Australia.

The collapse of the Korean War commodity boom, in the early 1950s, marked an unfortunate turning point in New Zealand’s economic history. International conditions were unpropitious for the pastoral sector in the second half of the twentieth century. Despite the aspirations of GATT, the United States, Western Europe and Japan restricted agricultural imports, especially of temperate foodstuffs, subsidized their own farmers and, in the case of the Americans and the Europeans, dumped their surpluses in third markets. The British market, which remained open until 1973, when the United Kingdom was absorbed into the EEC, was too small to satisfy New Zealand. Moreover, even the British resorted to agricultural subsidies. Compared with the price of industrial goods, the price of agricultural produce tended to weaken over the long term.

Insulation was a boon to manufacturers, and New Zealand developed a highly diversified industrial structure. But competition was ineffectual, and firms were able to pass cost increases on to the consumer. Import barriers induced many British, American, and Australian multinationals to establish plants in New Zealand. The protected industrial economy did have some benefits. It created jobs – there was full employment until the 1970s – and it increased the stock of technical and managerial skills. But consumers and farmers were deprived of access to cheaper – and often better quality – imported goods. Their interests and welfare were neglected. Competing demand from protected industries also raised the costs of farm inputs, including labor power, and thus reduced the competitiveness of New Zealand’s key export sector.

By the early 1960s, policy makers had realized that New Zealand was falling behind in the race for greater prosperity. The British food market was under threat, as the Macmillan government began a lengthy campaign to enter the protectionist EEC. New Zealand began to look for other economic partners, and the most obvious candidate was Australia. In 1901, New Zealand had declined to join the new federation of Australian colonies. Thus it had been excluded from the Australian common market. After lengthy negotiations, a partial New Zealand-Australia Free Trade Agreement (NAFTA) was signed in 1965. Despite initial misgivings, many New Zealand firms found that they could compete in the Australian market, where tariffs against imports from the rest of the world remained quite high. But this had little bearing on their ability to compete with European, Asian, and North American firms. NAFTA was given renewed impetus by the Closer Economic Relations (CER) agreement of 1983.

Between 1973 and 1984, New Zealand governments were overwhelmed by a group of inter-related economic crises, including two serious supply shocks (the oil crises), rising inflation, and increasing unemployment. Robert Muldoon, the National Party (conservative) prime minister between 1975 and 1984, pursued increasingly erratic macroeconomic policies. He tightened government control over the economy in the early eighties. There were dramatic fluctuations in inflation and in economic growth. In desperation, Muldoon imposed a wage and price freeze in 1982-84. He also mounted a program of large-scale investments, including the expansion of a steel works, and the construction of chemical plants and an oil refinery. By means of these investments, he hoped to reduce the import bill and secure a durable improvement in the balance of payments. But the “Think Big” strategy failed – the projects were inadequately costed, and inherently risky. Although Muldoon’s intention had been to stabilize the economy, his policies had the opposite effect.

Economic Reform, 1984-2000

Muldoon’s policies were discredited, and in 1984 the Labour Party came to power. All other economic strategies having failed, Labour resolved to deregulate and restore the market process. (This seemed very odd at the time.) Within a week of the election, virtually all controls over interest rates had been abolished. Financial markets were deregulated, and, in March 1985, the New Zealand dollar was floated. Other changes followed, including the sale of public sector trading organizations, the reduction of tariffs and the elimination of import licensing. However, reform of the labor market was not completed until the early 1990s, by which time National (this time without Muldoon or his policies) was back in office.

Once credit was no longer rationed, there was a large increase in private sector borrowing, and a boom in asset prices. Numerous speculative investment and property companies were set up in the mid-eighties. New Zealand’s banks, which were not used to managing risk in a deregulated environment, scrambled to lend to speculators in an effort not to miss out on big profits. Many of these ventures turned sour, especially after the 1987 share market crash. Banks were forced to reduce their lending, to the detriment of sound as well as unsound borrowers.

Tight monetary policy and financial deregulation led to rising interest rates after 1984. The New Zealand dollar appreciated strongly. Farmers bore the initial brunt of high borrowing costs and a rising real exchange rate. Manufactured imports also became more competitive, and many inefficient firms were forced to close. Unemployment rose in the late eighties and early nineties. The early 1990s were marked by an international recession, which was particularly painful in New Zealand, not least because of the high hopes raised by the post-1984 reforms.

An economic recovery began towards the end of 1991. With a brief interlude in 1998, strong growth persisted for the remainder of the decade. Confidence was gradually restored to the business sector. Unemployment began to recede. After a lengthy time lag, the economic reforms seemed to be paying off for the majority of the population.

Large structural changes took place after 1984. Factors of production switched out of the protected manufacturing sector, and were drawn into services. Tourism boomed as the relative cost of international travel fell. The face of the primary sector also changed, and the wine industry began to penetrate world markets. But not all manufacturers struggled. Some firms adapted to the new environment and became more export-oriented. For instance, a small engineering company, Scott Technology, became a world leader in the provision of equipment for the manufacture of refrigerators and washing machines.

Annual inflation was reduced to low single digits by the early nineties. Price stability was locked in through the 1989 Reserve Bank Act. This legislation gave the central bank operational autonomy, while compelling it to focus on the achievement and maintenance of price stability rather than other macroeconomic objectives. The Reserve Bank of New Zealand was the first central bank in the world to adopt a regime of inflation targeting. The 1994 Fiscal Responsibility Act committed governments to sound finance and the reduction of public debt.

By 2000, New Zealand’s population was approaching four million. Overall, the reforms of the eighties and nineties were responsible for creating a more competitive economy. New Zealand’s economic decline relative to the rest of the OECD was halted, though it was not reversed. In the nineties, New Zealand enjoyed faster economic growth than either Germany or Japan, an outcome that would have been inconceivable a few years earlier. But many New Zealanders were not satisfied. In particular, they were galled that their closest neighbor, Australia, was growing even faster. Australia, however, was an inherently much wealthier country with massive mineral deposits.

Assessment

Several explanations have been offered for New Zealand’s relatively poor economic performance during the twentieth century.

Wool, meat, and dairy produce were the foundations of New Zealand’s prosperity in Victorian and Edwardian times. After 1920, however, international market conditions were generally unfavorable to pastoral exports. New Zealand had the wrong comparative advantage to enjoy rapid growth in the twentieth century.

Attempts to diversify were only partially successful. High labor costs and the small size of the domestic market hindered the efficient production of standardized labor-intensive goods (e.g. garments) and standardized capital-intensive goods (e.g. autos). New Zealand might have specialized in customized and skill-intensive manufactures, but the policy environment was not conducive to the promotion of excellence in niche markets. Between 1938 and the 1980s, Latin American-style trade policies fostered the growth of a ramshackle manufacturing sector. Only in the late eighties did New Zealand decisively reject this regime.

Geographical and geological factors also worked to New Zealand’s disadvantage. Australia drew ahead of New Zealand in the 1960s, following the discovery of large mineral deposits for which there was a big market in Japan. Staple theory suggests that developing countries may industrialize successfully by processing their own primary products, instead of by exporting them in a raw state. Canada had coal and minerals, and became a significant industrial power. But New Zealand’s staples of wool, meat and dairy produce offered limited downstream potential.

Canada also took advantage of its proximity to the U.S. market, and access to U.S. capital and technology. American-style institutions in the labor market, business, education and government became popular in Canada. New Zealand and Australia relied on arguably inferior British-style institutions. New Zealand was a long way from the world’s economic powerhouses, and it was difficult for its firms to establish and maintain contact with potential customers and collaborators in Europe, North America, or Asia.

Clearly, New Zealand’s problems were not all of its own making. The elimination of agricultural protectionism in the northern hemisphere would have given a huge boost to the New Zealand economy. On the other hand, in the period between the late 1930s and mid-1980s, New Zealand followed inward-looking economic policies that hindered economic efficiency and flexibility.

References

Bassett, Michael. The State in New Zealand, 1840-1984. Auckland: Auckland University Press, 1998.

Belich, James. Making Peoples: A History of the New Zealanders from Polynesian Settlement to the End of the Nineteenth Century. Auckland: Penguin, 1996.

Condliffe, John B. New Zealand in the Making. London: George Allen & Unwin, 1930.

Dalziel, Paul. “New Zealand’s Economic Reforms: An Assessment.” Review of Political Economy 14, no. 2 (2002): 31-46.

Dalziel, Paul and Ralph Lattimore. The New Zealand Macroeconomy: Striving for Sustainable Growth with Equity. Melbourne: Oxford University Press, fifth edition, 2004.

Easton, Brian. In Stormy Seas: The Post-War New Zealand Economy. Dunedin: University of Otago Press, 1997.

Endres, Tony and Ken Jackson. “Policy Responses to the Crisis: Australasia in the 1930s.” In Capitalism in Crisis: International Responses to the Great Depression, edited by Rick Garside, 148-65. London: Pinter, 1993.

Evans, Lewis, Arthur Grimes, and Bryce Wilkinson (with David Teece). “Economic Reform in New Zealand 1984-95: The Pursuit of Efficiency.” Journal of Economic Literature 34, no. 4 (1996): 1856-1902.

Gould, John D. The Rake’s Progress: the New Zealand Economy since 1945. Auckland: Hodder and Stoughton, 1982.

Greasley, David and Les Oxley. “A Tale of Two Dominions: Comparing the Macroeconomic Records of Australia and Canada since 1870.” Economic History Review 51, no. 2 (1998): 294-318.

Greasley, David and Les Oxley. “Outside the Club: New Zealand’s Economic Growth, 1870-1993.” International Review of Applied Economics 14, no. 2 (1999): 173-92.

Greasley, David and Les Oxley. “Regime Shift and Fast Recovery on the Periphery: New Zealand in the 1930s.” Economic History Review 55, no. 4 (2002): 697-720.

Hawke, Gary R. The Making of New Zealand: An Economic History. Cambridge: Cambridge University Press, 1985.

Jones, Steve R.H. “Government Policy and Industry Structure in New Zealand, 1900-1970.” Australian Economic History Review 39, no. 3 (1999): 191-212.

Mabbett, Deborah. Trade, Employment and Welfare: A Comparative Study of Trade and Labour Market Policies in Sweden and New Zealand, 1880-1980. Oxford: Clarendon Press, 1995.

Maddison, Angus. Dynamic Forces in Capitalist Development. Oxford: Oxford University Press, 1991.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

McKinnon, Malcolm. Treasury: 160 Years of the New Zealand Treasury. Auckland: Auckland University Press in association with the Ministry for Culture and Heritage, 2003.

Schedvin, Boris. “Staples and Regions of the Pax Britannica.” Economic History Review 43, no. 4 (1990): 533-59.

Silverstone, Brian, Alan Bollard, and Ralph Lattimore, editors. A Study of Economic Reform: The Case of New Zealand. Amsterdam: Elsevier, 1996.

Singleton, John. “New Zealand: Devaluation without a Balance of Payments Crisis.” In The World Economy and National Economies in the Interwar Slump, edited by Theo Balderston, 172-90. Basingstoke: Palgrave, 2003.

Singleton, John and Paul L. Robertson. Economic Relations between Britain and Australasia, 1945-1970. Basingstoke: Palgrave, 2002.

Ville, Simon. The Rural Entrepreneurs: A History of the Stock and Station Agent Industry in Australia and New Zealand. Cambridge: Cambridge University Press, 2000.

Citation: Singleton, John. “New Zealand in the Nineteenth and Twentieth Centuries”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-new-zealand-in-the-nineteenth-and-twentieth-centuries/

Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile but the humid tropical climate subject to monsoonal weather patterns creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang di-Pertuan Agong) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, rubber products etc. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for a Newly-Industrialized Country (NIC) status (30 percent of exports to consist of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed in the long-term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally manifested in violence, notably in 1969 (see below).

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions, Arabs, Indians and Chinese regularly visited. Some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West which saw the innovation of large scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both aspects, the deficiency was supplied largely from foreign sources.

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface. Difficulties with flooding limited the depth of mining; furthermore their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants who dominated the industry in the nineteenth century bringing new technology which improved ore recovery and water control, facilitating mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who again introduced new technology – such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate to even deeper levels. These innovations required substantial capital for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a raw material for new industries in the West, notably for tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea Brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished, and after initial hesitancy over the five years needed for the trees to reach productive age, planters, Chinese and European alike, rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig.1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 2.3 million acres), or some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with expenses advanced that migrants had subsequently to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century, but their location slightly away from the main trade route (see map) and their rugged internal terrain, costly for transport, made them less attractive to foreign investment. However, the discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, put Sarawak more prominently in the business of exports. As in Malaya, the labor force came largely from immigrants from China and, to a lesser extent, Java.

The growth in production for export in Malaysia was facilitated by development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

Country               1900     1929     1950      1973     1990
Malaya/Malaysia (1)   600 (2)  1910     1828      3088     5775
Singapore             –        –        2276 (3)  5372     14441
Burma                 523      651      304       446      562
Thailand              594      623      652       1559     3694
Indonesia             617      1009     727       1253     2118
Philippines           735      1106     943       1629     1934
South Korea           568      945      565       1782     6012
Japan                 724      1192     1208      7133     13197

Notes: (1) Malaya to 1973; (2) guesstimate; (3) figure refers to 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports restored. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948) from which Singapore, with its predominantly Chinese population (about 75%), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed, and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

After the war two long-term problems came to the forefront. These were (a) the political fragmentation (see above) which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula, from the 1970s. Gas was exported in liquefied form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings, but this had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was to restructure the Malaysian economy over two decades, 1970-90, with the following objectives:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group had about two-thirds of jobs in the primary sector where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle class occupations (e.g. professions, management) the share was only 13 percent.
  3. to eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI), with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang where production was carried on with the undertaking that the output would be exported. Firms locating there received incentives such as duty-free imports of raw materials and capital goods, and tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example, the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).
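
A stylized calculation shows why redistribution through growth need not make any group absolutely worse off (the fourfold growth multiple here is assumed purely for illustration, not taken from the sources). Suppose total corporate equity stood at 100 units in 1970 and 400 units in 1990. Then, with the foreign share falling from 63 to 30 percent,

\[
\underbrace{0.63 \times 100 = 63}_{\text{foreign holdings, 1970}}
\qquad
\underbrace{0.30 \times 400 = 120}_{\text{foreign holdings, 1990}}
\]

foreign holdings nearly double in absolute terms even as the foreign share is halved; the growth of the total, not confiscation, carries the redistribution.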

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                                1970            1990
(a) Wealth ownership (%)
    Bumiputera                                   2.0            20.3
    Other Malaysians                            34.6            54.6
    Foreigners                                  63.4            25.1
(b) Employment (% of total workers in each sector)
    Primary sector (agriculture, mineral extraction,
    forest products and fishing)
        Bumiputera                       67.6 [61.0]*    71.2 [36.7]*
        Others                                  32.4            28.8
    Secondary sector (manufacturing and construction)
        Bumiputera                       30.8 [14.6]*    48.0 [26.3]*
        Others                                  69.2            52.0
    Tertiary sector (services)
        Bumiputera                       37.9 [24.4]*    51.0 [36.9]*
        Others                                  62.1            49.0

Note: Figures in [ ]* give the percentage of the bumiputera workforce employed in that sector. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have fallen well short of the 30 percent mark. However, other evidence suggests that in certain sectors, such as agriculture/mining (35.7%) and banking/insurance (49.7%), bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while the bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle-class employment the bumiputera share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%), and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear in the changing composition of Gross Domestic Product (GDP) shown in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these two decades Malaysia accomplished a transition from a primary-product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969, the government maintained firm control over the administrative machinery. Malaysia’s Five Year Development Plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential between 1975 and 1991, were a notable feature, as was the participation of women in the workforce, which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990, and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians, and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health, and longer life expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor, particularly during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981 to 2003. While supporting the NEP’s use of positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and an ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990. It was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020, which aims to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (at the time the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates, beginning with the Thai baht in May 1997, spreading rapidly throughout East and Southeast Asia, and severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the US dollar, while foreign capital repatriated before staying at least twelve months was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960 to 1990 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

1960-69 1971-80 1981-89
Japan 10.9 5.0 4.0
Asian “Tigers”
Hong Kong 10.0 9.5 7.2
South Korea 8.5 8.7 9.3
Singapore 8.9 9.0 6.9
Taiwan 11.6 9.7 8.1
ASEAN-4
Indonesia 3.5 7.9 5.2
Malaysia 6.5 8.0 5.4
Philippines 4.9 6.2 1.7
Thailand 8.3 9.9 7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed in 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growing into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common description. The exception was Japan, which encountered major problems with structural change and an over-extended banking system. Post-crisis, the countries of the region have resumed growth, but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001, and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov.6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes, bringing early exposure to the international economy. The sparse indigenous population and labor force have been supplemented by immigrants, mainly from neighboring Asian countries, many of whom became permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring, with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge: Cambridge University Press, 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom-or-Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Paper presented at the Eleventh International Economic History Congress, Milan, 1994.

Citation: Drabble, John. “The Economic History of Malaysia.” EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, or “exit,” is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).
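
A stylized numerical illustration, with hypothetical payoffs chosen for this article rather than taken from the sources cited, makes the free-rider logic concrete. Suppose a successful organizing drive would raise every worker’s pay by the equivalent of $100, that activism costs each participant $20 in dues and lost time plus the risk of dismissal, and that success requires many workers to join. A worker who stays out receives the full $100 if the drive succeeds and loses nothing if it fails; an activist receives at most $80 and bears all of the downside. Whatever the others do, abstaining pays better, even though all workers are better off if all participate.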

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism and throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence,” the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes, when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of total growth came in each country’s five fastest-growing years, and over 90 percent in the fastest ten. As Table 3 shows, in these thirteen countries unions grew by over 10 percent a year in the years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members, it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent.
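
Stated as a formula (a restatement of this note, not an equation from the sources):

    unionization rate = 100 × (union members, including unemployed and retired) / (nonagricultural workers)

Because the numerator counts members who are not currently employed, the rate can exceed 100, as with Sweden’s figure of 103.0 in 1985.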

Table 2
Union Growth in Peak and Other Years

Country Years Membership Growth Share of Growth (%) Excess Growth (%)
Top 5 Years Top 10 Years All Years 5 Years 10 Years 5 Years 10 Years
Australia 83 720 000 1 230 000 3 125 000 23.0 39.4 17.0 27.3
Austria 52 5 411 000 6 545 000 3 074 000 176.0 212.9 166.8 194.4
Canada 108 855 000 1 532 000 4 028 000 21.2 38.0 16.6 28.8
Denmark 85 521 000 795 000 1 883 000 27.7 42.2 21.8 30.5
France 92 6 605 000 7 557 000 2 872 000 230.0 263.1 224.5 252.3
Germany 82 10 849 000 13 543 000 9 120 000 119.0 148.5 112.9 136.3
Italy 38 3 028 000 4 671 000 3 713 000 81.6 125.8 68.4 99.5
Japan 43 4 757 000 6 692 000 8 983 000 53.0 74.5 41.3 51.2
Netherlands 71 671 000 1 009 000 1 158 000 57.9 87.1 50.9 73.0
Norway 85 304 000 525 000 1 177 000 25.8 44.6 19.9 32.8
Sweden 99 633 000 1 036 000 3 859 000 16.4 26.8 11.4 16.7
UK 96 4 929 000 8 011 000 8 662 000 56.9 92.5 51.7 82.1
US 109 10 247 000 14 796 000 22 293 000 46.0 66.4 41.4 57.2
Total 1043 49 530 000 67 942 000 73 947 000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. Growth in the years of rapid growth can even exceed growth over the entire period, because some of it is temporary: years of rapid growth are often followed by years of decline.
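
The excess-growth calculation in the note above can be stated compactly; the restatement and the worked example use only figures already shown in Table 2. If a country has T years of data and its top k years account for a share s(k) percent of total membership growth, then

    excess growth (k years) = s(k) − 100 × k / T

For the United States, T = 109 and s(5) = 46.0, so five-year excess growth is 46.0 − 100 × 5/109 ≈ 46.0 − 4.6 = 41.4 percent; the ten-year figure is 66.4 − 100 × 10/109 ≈ 57.2 percent, matching the table’s final columns.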

Sources: Bain and Price (1980), 39; Visser (1989).

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country Striker Rate Quartile Change
Lowest Second Third Highest (Highest − Lowest)
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile.
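
The quartile procedure this note describes is easy to sketch in code. The following Python fragment is a minimal illustration written for this article, not code from the sources; the function name and the sample data are hypothetical, and an actual replication would use each country’s full yearly series of striker rates and membership growth.

    def growth_by_striker_quartile(striker_rates, growth_rates):
        # Pair each year's striker rate with its membership growth rate,
        # then order the years from lowest to highest striker rate.
        years = sorted(zip(striker_rates, growth_rates))
        n = len(years)
        q = n // 4
        # Split the ordered years into four quartiles; the last quartile
        # absorbs the remainder when n is not divisible by four.
        quartiles = [years[i * q:(i + 1) * q] if i < 3 else years[3 * q:]
                     for i in range(4)]
        # Mean membership growth within each quartile, lowest to highest.
        return [sum(g for _, g in qt) / len(qt) for qt in quartiles]

    # Hypothetical example: sixteen year-observations for one country.
    rates = [0.5, 1.2, 2.8, 6.0, 0.9, 1.8, 3.5, 7.5,
             0.2, 1.5, 3.0, 5.5, 0.7, 2.0, 4.0, 8.0]
    growth = [0.1, 0.8, 2.5, 9.0, 0.3, 1.2, 3.0, 12.0,
              -0.5, 1.0, 2.0, 8.5, 0.4, 1.5, 3.5, 11.0]
    print(growth_by_striker_quartile(rates, growth))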

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and by capping wages, employment, and output. Controlled by the independent craftsmen, the masters who employed journeymen and trained apprentices, guilds ran industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few could expect to become master artisans or to own their own establishments. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the say of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines (alliances uniting workers on the basis of their lack of property and their common relationship with capitalists), craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions built a strong bargaining position, one enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a “take-it-or-leave-it” basis; either the employers accepted the demands or they fought a contest of strength to determine whether they could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded but only when they attract allies among politicians, state officials, and the affluent public. Sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly in mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement when employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers, those whom historian Eric Hobsbawm labeled the “labor aristocracy” (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, 120,000 of them in craft unions, including carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to “industrial” unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, “solidarity” strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from threatened authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave in 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions, driving membership down to half a million by September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other European countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, unions and party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labour Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the early 1900s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (CGT), which they tried to use as a base for a revolutionary general strike in which the workers would seize economic and political power. Consolidating craft unions into industrial and regional unions, the Bourses du travail, syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists win concessions beyond any they could have won with economic leverage alone. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would have replaced the Republic with an authoritarian regime. Reminded daily of the importance of republican values and of the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, and allowed French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strikebreakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise most union members belonged to craft organizations, including nearly half the printers and a third of cigar makers, construction workers, and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture Forestry Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation Communication Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: union membership from Wolman (1936), employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1983, 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry, among workers still performing traditional tasks where training was through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper, and metal fabrication, industries using technologies without traditional craft skills. AFL strongholds, including construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, employed craft workers in relatively small establishments little changed from twenty-five years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass-production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept unions.

Unions in the World War I Era

The AFL and World War I

For all its limits, it must be acknowledged that the AFL and its craft affiliates survived while their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, in favorable circumstances, could form the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking, and steel, doubling union membership between 1915 and 1919. But when federal support ended with the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of a deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions: wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-Time 1913 12 498 000 11 742 000 756 000
1920 27 649 000 25 687 000 1 962 000
Growth 1913-20: 121% 119% 160%
Post-war 1920 27 649 000
1929 18 149 000
Growth 1920-29: -34%
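
The growth rows in Table 5 follow directly from the membership rows. For the twelve countries, for example, wartime growth is (27 649 000 − 12 498 000) / 12 498 000 ≈ 1.21, the 121 percent cited above, while the post-war change is (18 149 000 − 27 649 000) / 27 649 000 ≈ −0.34, i.e. −34 percent.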

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustration with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war: 2.5 million strikers in France in 1919 and 1920, compared with 200,000 in 1913; 13 million in Germany, up from 300,000 in 1913; and 5 million in the United States, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforces of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a 1920 peak of 26 million members in eleven countries to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass-production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades, where employment was usually declining. By 1924, they were almost completely eliminated from the dynamic industries of the second industrial revolution: steel, automobiles, consumer electronics, chemicals, and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion open shop. Between 1929 and 1933, real national income in the United States fell by one-third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse, a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing; they depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression usually lowers union membership, as unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression 1929 12 401 000 11 508 000
1933 11 455 000 10 802 000
Growth 1929-33 -7.6% -6.1%
Popular Front Period 1933 10 802 000
1938 19 007 000
Growth 1933-38 76.0%
Second World War 1938 19 007 000
1947 35 485 000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of May 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Phillippe and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Once the agreements were codified in statute by the Popular Front government, French unions gained new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members, with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent, respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” The settlement included lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm was developed in the United States. Rather than vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving “employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers.” AFL leader William Green pronounced this a “charter of industrial freedom,” and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, aluminum, lumber and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass-production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists' impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action, but a half-century of experience had taught the AFL's leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not with the AFL's flawed tactics but with its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee-representation plans while refusing to deal with independent unions (Irons, 2000). By 1935 almost as many industrial establishments had employer-dominated employee-representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters' Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became the independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism, and other industries. Building effectively on local rank-and-file militancy, including sit-down strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the "Wagner Act." The Wagner Act established a National Labor Relations Board (NLRB) charged with enforcing employees' "right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection." It provided for elections to choose union representation and required employers to negotiate "in good faith" with their workers' chosen representatives. By shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government's imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act's preamble as a mandate to promote organization. By 1945 the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 of them. Still, the NLRB was not responsible for the period's union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years, until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones and Laughlin Steel Corporation (1937). Furthermore, the election procedure's gross contribution of 5,000,000 members was less than half of the period's net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open-shop employers in cities like Akron, Ohio, and Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania, and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of the employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions both by eliminating unemployment and because state officials supported unions to gain labor's backing for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce in which unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and "maintenance of membership" rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. "Maintenance of membership" rules prevented free riders even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected: American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War II promoted unions and social change. A European civil war, the conflict divided the continent not only between warring countries but, within countries, between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry of socialists and Communists into government.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labour Party government elected in the United Kingdom in 1945 established a new National Health Service and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left emerged stronger everywhere in post-war Europe, but in some countries labor's position deteriorated quickly. With the onset of the Cold War, the popular fronts uniting Communists, socialists, and bourgeois liberals dissolved in France, Italy, and Japan, and labor's management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as Scandinavia but also Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom, and the United States, because their unions had not been accepted as bargaining partners by management and lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and Italy, and bringing socialists into government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American Exceptionalism became most valid, when the United States emerged as the advanced capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies. It has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor's political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, "Operation Dixie," failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO in defeat left the South a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor's wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor at a time of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization's members. This marked the end of the CIO's expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war: non-Communist affiliates raided locals belonging to the "Communist-led" unions, fatally distracting both sides from the CIO's original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO's leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason was left for the CIO to stay independent. In 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America's unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned all higher aspirations and used their unions for purely personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a "golden age" for American unions. Established unions found a secure place at the bargaining table with America's leading firms in industries such as autos, steel, trucking, and chemicals. Periodically negotiated contracts exchanged good wages for cooperative workplace relations. Negotiated rules provided a system of civil authority at work, with regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience, and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefits: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms' labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers thereby earn much of their wage gain. Others, however, find little productivity gain for unionized workers once account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization's productivity benefits.
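
The competitive arithmetic here can be made concrete. A minimal illustration in Python, using the round figures cited above (a roughly 20 percent union wage premium against a roughly 15 percent union productivity premium); the numbers come from the paragraph, but the calculation itself is only illustrative:

# Relative unit labor cost = relative wage / relative productivity.
wage_premium = 0.20          # union wages about 20% above nonunion
productivity_premium = 0.15  # union labor productivity about 15% above nonunion
relative_unit_labor_cost = (1 + wage_premium) / (1 + productivity_premium)
print(f"Union/nonunion unit labor cost: {relative_unit_labor_cost:.3f}")
# prints 1.043: even granting the productivity gain, a unionized firm pays
# roughly 4% more per unit of output, before counting richer benefit packages.

This is why, as the paragraph notes, higher productivity alone did not erase the competitive penalty of the union wage differential.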

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties: Communists in France and Italy, socialists or labor parties elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists' political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of the racist provisions in their own practice. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But here too, the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of the United States union movement. Maintaining their strength in traditional, masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, union decline in these industries, combined with growth in heavily female public-sector employment, led to the feminization of the American labor movement. Union membership began to decline in the private sector immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public-sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public-sector workers, increasing union membership there has changed the American labor movement's racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and, most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s, and despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early-twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee-representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers' unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). Yet there remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector, and at those private employers where workers have a free choice, workers are as likely to join unions as they ever were, and as likely as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, unions revived once a government committed to workplace democracy sheltered them from employer repression. If we see another such government, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.
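
The ratio rows of Table 7 can be reproduced from the rates above. A minimal check in Python (small discrepancies reflect rounding in the published rates):

# Reproduce the all-industries ratio row: US rate / six-country rate.
us = {1970: 30.0, 1980: 24.7, 1990: 17.6}
six = {1970: 37.1, 1980: 39.7, 1990: 35.3}
for year in (1970, 1980, 1990):
    print(year, round(us[year] / six[year], 3))
# prints 0.809, 0.622, 0.499: the US slid from four-fifths of the
# six-country unionization rate in 1970 to about half by 1990.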

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric, Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Philippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919-1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA, Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers’ Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets are relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. Viewed from a distance and over the long run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay their own way to the Americas (Grubb 1985a). Most financed the voyage by signing contracts, or “indentures,” committing themselves to work for a fixed number of years in the future—their labor being their only viable asset—with British merchants, who then sold these contracts to colonists after their ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Destination Number Percentage of total emigration Percent listed as servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 100.00 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, the use of indentured servitude to finance the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor-to-capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious, and cultural differences between immigrants and the native born.
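
The condition on the labor-to-capital ratio can be made precise with a textbook example. A sketch in LaTeX notation, assuming a Cobb-Douglas production function (an assumption made here for illustration; the text does not specify a functional form):

% Assume output Y = A K^{\alpha} L^{1-\alpha}, with 0 < \alpha < 1.
% Under competition the wage equals the marginal product of labor:
\[
  w = \frac{\partial Y}{\partial L} = (1-\alpha)\, A \left(\frac{K}{L}\right)^{\alpha}
\]
% Immigration raises L. If K is fixed, K/L falls and w falls with it.
% If capital inflows keep K/L from falling, w need not fall -- the
% Carter and Sutch (1999) point made in the text.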

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand, it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and they were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders, and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Year Total Labor Force (1000s) Agriculture Non-Agriculture: Total Manufacturing Services
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market, characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments, because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because many employment relationships are long-term, adjustments can occur along margins other than wages, and many of these margins involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, cities throughout the country had for the first time to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills their productivity increased giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses contracting with the firm to supply components or finished products for an agreed price, and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savoca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor turnover and informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job-ladders and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985), others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies that are sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade, with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers’ compensation insurance schemes and remove the issue from the courts. Once introduced, workers’ compensation schemes spread quickly: nine states passed legislation in 1911; thirteen more had joined the bandwagon by 1913; and by 1920, 44 states had such legislation (Fishback 2001).

Along with workers’ compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males. But rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, the Wagner Act’s passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly (through the establishment of occupational safety regulations and anti-discrimination laws, for example) and indirectly (through its efforts to manage the macroeconomy to ensure maximum employment).

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, pp. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
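
To make the arithmetic behind such comparisons concrete, the short sketch below (in Python, using purely hypothetical wage and cost-of-living figures rather than any historical series) shows the basic calculation: nominal wages are deflated by a local price index, and the law of one price predicts that the resulting real-wage gap should be near zero in an efficient market.

# A minimal sketch of measuring a real-wage gap between two markets.
# All numbers are hypothetical illustrations, not historical estimates.
nominal_wage = {"northeast": 1.50, "north_central": 1.95}    # daily wages, dollars
cost_of_living = {"northeast": 1.00, "north_central": 1.12}  # local price index

real_wage = {r: nominal_wage[r] / cost_of_living[r] for r in nominal_wage}
gap = real_wage["north_central"] / real_wage["northeast"] - 1.0
print(f"Real wage gap: {gap:.1%}")  # about 16% here; the law of one price predicts ~0

In practice, as noted above, the comparison also requires homogeneous workers and adjustments for non-pecuniary amenities, which this bare calculation ignores.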

Falling transportation and communications costs have encouraged a trend toward diminishing wage gaps, but this trend has been neither steady over time nor uniform across markets. That said, what stands out is in fact the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration, wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and briefly reversed wage convergence. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally-generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest sources (Margo 2000a; Sundstrom and Rosenbloom 1993; Coelho and Shepherd 1976) are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey, and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and took an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows that there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.

Table 3

Net Migration by Region, and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-00 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-00 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age and sex specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase this implies a net migration into the region; if the actual increase is less than predicted this implies net migration out of the region. The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
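
As an illustration of the census-survival calculation described in the note above, the sketch below (Python, with invented round numbers that do not correspond to any entry in Table 3) walks through a single decade for a single region:

# A minimal sketch of the net migration calculation described in the table note.
# All figures are hypothetical.
pop_start = 1000.0          # regional population at the start of the decade (thousands)
pop_end = 1080.0            # population at the end of the decade (thousands)
predicted_increase = 120.0  # natural increase implied by age- and sex-specific
                            # mortality rates applied to the starting population

actual_increase = pop_end - pop_start                 # 80 thousand
net_migration = actual_increase - predicted_increase  # actual minus predicted
print(net_migration)  # -40.0: growth fell short of prediction, so net out-migration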

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive, and this essay can only touch on a few of the more general themes as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000a, ch. 4) offers evidence of a high degree of equalization within local labor markets between farm and urban wages as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Williamson 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century has seen a substantial convergence in both of these differentials. Table 4 displays comparisons of earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full-time, full-year workers. Data for 2004 are for median weekly earnings of full-time wage and salary workers derived from data in the Current Population Survey accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of those of men, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower paying jobs. Whether these differences are the result of persistent discrimination, or arise because of differences in productivity, or reflect a choice by women to trade off greater flexibility in terms of labor market commitment for lower pay, remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, resulting in higher levels of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurements of the rate of unemployment only began in 1940. Prior to this date, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that the average level of unemployment and its volatility have declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.
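
The stakes in this dispute can be illustrated with a stylized sketch of benchmark interpolation (Python, with invented numbers; this is a simplification, not Lebergott's or Romer's actual procedure). The interpolated series combines a straight line between census benchmarks with an annual cyclical indicator, and the volatility of the result depends directly on how much weight the procedure gives that indicator:

import numpy as np

# Hypothetical census benchmark unemployment rates (percent).
y0, y1 = 1900, 1910
u0, u1 = 5.0, 5.9

# Hypothetical annual cyclical indicator (e.g., output's deviation from trend).
cycle = np.array([0.0, 0.4, -0.8, 1.5, 0.2, -1.1, 0.6, -0.3, 0.9, -0.5, 0.0])

years = np.arange(y0, y1 + 1)
trend = u0 + (u1 - u0) * (years - y0) / (y1 - y0)  # straight-line interpolation
beta = 1.0  # assumed sensitivity of unemployment to the cycle; the contested choice
estimate = trend + beta * cycle
print(estimate.round(2))
# A larger beta yields a more volatile series between benchmarks, which is the
# essence of the disagreement over prewar unemployment volatility.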

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s, and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in increased premiums earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes, especially those associated with the increased use of information technology, have increased relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue collar jobs.
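
As a brief illustration of how such a differential is computed, the sketch below (Python, with a simulated wage sample rather than the actual survey data underlying Plotnick et al.) calculates the 90/10 ratio:

import numpy as np

# A minimal sketch of the 90/10 wage differential, using simulated data.
rng = np.random.default_rng(0)
wages = rng.lognormal(mean=3.0, sigma=0.6, size=10_000)  # hypothetical hourly wages

p90, p10 = np.percentile(wages, [90, 10])
print(f"90/10 wage ratio: {p90 / p10:.2f}")
# Growing dispersion shows up as a rise in this ratio between two dates; on one
# reading of the figure cited above, the 1995 ratio was roughly 1.49 times its
# 1969 value.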

Efforts to expand the scope of analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather they are subject to historical forces of increasing returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Jeffrey G. Williamson. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J. III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago Press, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

The Roots of American Industrialization, 1790-1860

David R. Meyer, Brown University

The Puzzle of Industrialization

In a society which is predominantly agricultural, how is it possible for industrialization to gain a foothold? One view is that the demand of farm households for manufactures spurs industrialization, but such an outcome is not guaranteed. What if farm households can meet their own food requirements, and they choose to supply some of their needs for manufactures by engaging in small-scale craft production in the home? They might supplement this production with limited purchases of goods from local craftworkers and purchases of luxuries from other countries. This local economy would be relatively self-sufficient, and there would be no apparent impetus to alter it significantly through industrialization, that is, the growth of workshop and factory production for larger markets. Others would claim that limited gains might come from specialization, once demand passed some small threshold. Finally, it has been argued that if the farmers are impoverished, some of them would be available for manufacturing and this would provide an incentive to industrialize. However, this argument raises the question as to who would purchase the manufactures. One possibility is that non-farm rural dwellers, such as tradespeople, innkeepers, and professionals, as well as a small urban population, might provide an impetus to limited industrialization.

The Problem with the “Impoverished Agriculture” Theory

The industrialization of the eastern United States from 1790 to 1860 raises similar conundrums. For a long time, scholars thought that eastern agriculture was mostly of poor quality. Thus, the farm labor force left agriculture for workshops, such as those which produced shoes, or for factories, such as the cotton textile mills of New England. These manufactures provided employment for women and children, who otherwise had limited productive possibilities because the farms were not economical. Yet, the market for manufactures remained mostly in the East prior to 1860. Consequently, it is unclear who would have purchased the products to support the growth of manufactures before 1820, as well as to undergird the large-scale industrialization of the East during the two decades following 1840. Even if the impoverished-agriculture explanation of the East’s industrialization is rejected, we are still left with the curiosity that as late as 1840, about eighty percent of the population lived in rural areas, though some of them were in nonfarm occupations.

In brief, the puzzle of eastern industrialization between 1790 and 1860 can be resolved – the East had a prosperous agriculture. Farmers supplied low-cost agricultural products to rural and urban dwellers, and this population demanded manufactures, which were supplied by vigorous local and subregional manufacturing sectors. Some entrepreneurs shifted into production for larger market areas, and this transformation occurred especially in sectors such as shoes, selected light manufactures produced in Connecticut (such as buttons, tinware, and wooden clocks), and cotton textiles. Transportation improvements exerted little impact on these agricultural and industrial developments, primarily because the lowly wagon served effectively as a transport medium and much of the East’s most prosperous areas were accessible to cheap waterway transportation. The metropolises of Boston, New York, Philadelphia, and, to a lesser extent, Baltimore, and the satellites of each (together, each metropolis and its satellites is called a metropolitan industrial complex), became leading manufacturing centers, and other industrial centers emerged in prosperous agricultural areas distant from these complexes. The East industrialized first, and, subsequently, the Midwest began an agricultural and industrial growth process which was underway by the 1840s. Together, the East and the Midwest constituted the American Manufacturing Belt, which was formed by the 1870s, whereas the South failed to industrialize commensurately.

Synergy between Agriculture and Manufacturing

The solution to the puzzle of how industrialization can occur in a predominantly agricultural economy recognizes the possibility of synergy between agriculture and manufacturing. During the first three decades following 1790, prosperous agricultural areas emerged in the eastern United States. Initially, these areas were concentrated near the small metropolises of Boston, New York, and Philadelphia, and in river valleys such as the Connecticut Valley in Connecticut and Massachusetts, the Hudson and Mohawk Valleys in New York, the Delaware Valley bordering Pennsylvania and New Jersey, and the Susquehanna Valley in eastern Pennsylvania. These agricultural areas had access to cheap, convenient transport which could be used to reach markets; the farms supplied the growing urban populations in the cities and some of the products were exported. Furthermore, the farmers supplied the nearby, growing non-farm populations in the villages and small towns who provided goods and services to farmers. These non-farm consumers included retailers, small mill owners, teamsters, craftspeople, and professionals (clergy, physicians, and lawyers).

Across every decade from 1800 to 1860, the number of farm laborers grew, thus testifying to the robustness of eastern agriculture (see Table 1). And, this increase occurred in the face of an expanding manufacturing sector, as increasing numbers of rural dwellers left the farms to work in the factories, especially after 1840. Even New England, the region which presumably was the epitome of declining agriculture, witnessed a rise in the number of farm laborers all the way up to 1840, and, as of 1860, the drop off from the peak was small. Massachusetts and Connecticut, which had vigorous small workshops and increasing numbers of small factories before 1840, followed by a surge in manufacturing after 1840, matched the trajectory of farm laborers in New England as a whole. The numbers in these two states peaked in 1840 and fell off only modestly over the next twenty years. The Middle Atlantic region witnessed an uninterrupted rise in the number of farm laborers over the sixty-year period. New York and Pennsylvania, the largest states, followed slightly different paths. In New York, the number of farm laborers peaked around 1840 and then stabilized near that level for the next two decades, whereas in Pennsylvania the number of farm laborers rose in an uninterrupted fashion.

Table 1
Number of Farm Laborers by Region and Selected States, 1800-1860

Year 1800 1810 1820 1830 1840 1850 1860
New England 228,100 257,700 303,400 353,800 389,100 367,400 348,100
Massachusetts 73,200 72,500 73,400 78,500 87,900 80,800 77,700
Connecticut 50,400 49,300 51,500 55,900 57,000 51,400 51,800
Middle Atlantic 375,700 471,400 571,700 715,000 852,800 910,400 966,600
New York 111,800 170,100 256,000 356,300 456,000 437,100 449,100
Pennsylvania 112,600 141,000 164,900 195,200 239,000 296,300 329,000
East 831,900 986,800 1,178,500 1,422,600 1,631,000 1,645,200 1,662,800

Source: Thomas Weiss, “U.S. Labor Force Estimates and Economic Growth, 1800-1860,” in American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis (Chicago, IL: University of Chicago Press, 1992), table 1A.9, p. 51.

The farmers, retailers, professionals, and others in these prosperous agricultural areas accumulated capital which became available for other economic sectors, and manufacturing was one of the most important to receive this capital. Entrepreneurs who owned small workshops and factories obtained capital to turn out a wide range of goods such as boards, boxes, utensils, building hardware, furniture, and wagons, which were in demand in the agricultural areas. And, some of these workshops and factories enlarged their market areas to a subregion as they gained production efficiencies; but, this did not account for all industrial development. Selected manufactures such as shoes, tinware, buttons, and cotton textiles were widely demanded by urban and rural residents of prosperous agricultural areas and by residents of the large cities. These products were high value relative to their weight; thus, the cost to ship them long distances was low. Astute entrepreneurs devised production methods and marketing approaches to sell these goods in large market areas, including New England and the Middle Atlantic regions of the East.

Manufactures Which Were Produced for Large Market Areas

Shoes and Tinware

Small workshops turned out shoes. Massachusetts entrepreneurs devised an integrated shoe production complex based on a division of labor among shops, and they established a marketing arm of wholesalers, principally in Boston, who sold the shoes throughout New England, to the Middle Atlantic, and to the South (particularly, to slave plantations). Businesses in Connecticut drew on the extensive capital accumulated by the well-to-do rural and urban dwellers of that state and moved into tinware, plated ware, buttons, and wooden clocks. These products, like shoes, also were manufactured in small workshops, but a division of labor among shops was less important than the organization of production within shops. Firms producing each good tended to agglomerate in a small subregion of the state. These clusters arose because entrepreneurs shared information about production techniques and specialized skills which they developed, and this knowledge was communicated as workers moved among shops. A marketing system of peddlers first emerged in the tinware sector; the peddlers sold the goods throughout Connecticut, and then extended their travels to the rest of New England and to the Middle Atlantic. Workshops which made other types of light, high-value goods soon took advantage of the peddler distribution system to enlarge their market areas. At first, these peddlers operated part-time during the year, but as the supply of goods increased and market demand grew, peddlers operated for longer periods of the year and they traveled farther.

Cotton Textiles

Cotton textile manufacturing was an industry built on low-wage, especially female, labor; presumably, this industry offered opportunities in areas where farmers were unsuccessful. Yet, similar to the other manufactures which enlarged their market areas to the entire East before 1820, cotton textile production emerged in prosperous agricultural areas. That is not surprising, because this industry required substantial capital, technical skills, and, initially, nearby markets. These requirements were met in rich farming areas, which also could draw on wealthy merchants in large cities who contributed capital and provided sale outlets beyond nearby markets as output grew. The production processes in cotton textile manufacturing, however, diverged from the approaches to making shoes and small metal and wooden products. From the start, production processes included textile machinery, which initially consisted of spinning machines to make yarn; later (after 1815), weaving machines and other mechanical equipment were added. Highly skilled mechanics were required to build the machines and to maintain them. The greater capital requirements for cotton mills, compared to those for shoes and the small-goods manufactures of Connecticut, meant that merchant wholesalers and wealthy retailers, professionals, mill owners, and others were important underwriters of the factories.

Starting in the 1790s, New England, and especially Rhode Island, housed the leaders in early cotton textile manufacturing. Providence merchants funded some of the first successful cotton spinning mills, and they drew on the talents of Samuel Slater, an immigrant British machinist. He trained many of the first important textile mechanics, and investors in various parts of Rhode Island, Connecticut, Massachusetts, New Hampshire, and New York hired them to build mills. Between 1815 and 1820, power-loom weaving became commercially feasible, led by firms in Rhode Island and, especially, Massachusetts. Boston merchants, starting with the Boston Manufacturing Company at Waltham, devised a business plan which targeted large-scale, integrated cotton textile manufacturing, with a marketing/sales arm housed in a separate firm. They enlarged their effort significantly after 1820, and much of the impetus to the growth of the cotton textile industry came from the success entrepreneurs had in lowering the cost of production.

The Impact of Transportation Improvements

Following 1820, government and private sources invested substantial sums in canals, and after 1835, railroad investment increased rapidly. Canals required huge volumes of low-value commodities in order to pay operating expenses, cover interest on the bonds which were issued for construction, and retire the bonds at maturity. These conditions were only met in the richest agricultural and resource (lumbering and coal mining, for example) areas traversed by the Erie and Champlain Canals in New York and the coal canals in eastern Pennsylvania and New Jersey. The vast majority of the other canals failed to yield benefits for agriculture and industry, and most were costly debacles. Early railroads mainly carried passengers, especially within fifty to one hundred miles of the largest cities – Boston, New York, Philadelphia, and Baltimore. Industrial products were not carried in large volumes until after 1850; consequently, railroads built before that time had little impact on industrialization in the East.

Canals and railroads had minor impacts on agricultural and industrial development because the lowly wagon provided withering competition. Wagons offered flexible, direct connections between origins and destinations, without the need to transship goods, as was the case with canals and railroads; these modes required wagons at their end points. Within a distance of about fifty miles, the cost of wagon transport was competitive with alternative transport modes, so long as the commodities were high value relative to their weight. And, infrequent transport of these goods could occur over distances of as much as one hundred miles. This applied to many manufactures, and agricultural commodities could be raised to high value by processing prior to shipment. Thus, wheat was turned into flour, corn and other grains were fed to cattle and pigs which were processed into beef and pork prior to shipment, and milk was converted into butter and cheese. Most of the richest agricultural and industrial areas of the East were less than one hundred miles from the largest cities, or these areas were near low-cost waterway transport along rivers, bays, and the Atlantic Coast. Therefore, canals and railroads in these areas had difficulty competing for freight, and outside these areas the limited production generated little demand for long-distance transport services.

Agricultural Prosperity Continues

After 1820, eastern farmers seized the increasing market opportunities in the prosperous rural areas as nonfarm processing expanded and village and small town populations demanded greater amounts of farm products. The large number of farmers concentrated around the rapidly growing metropolises (Boston, New York, Philadelphia, and Baltimore) and near urban agglomerations such as Albany-Troy, New York, specialized increasingly in urban market goods such as fluid milk, fresh vegetables, fruit, butter, and hay (for horse transport). Farmers farther away responded to competition by shifting into products which could be transported long distances to market: wheat milled into flour, cattle which walked to market, and pigs converted into pork. During the winter these farms sent butter, and cheese was a specialty which could be lucrative for the long periods of the year when temperatures were cool.

These changes swept across the East, and, after 1840, farmers increasingly adjusted their production to compete with cheap wheat, cattle, and pork arriving over the Erie Canal from the Midwest. Wheat growing became less profitable, and specialized agriculture expanded, such as potatoes, barley, and hops in central New York and cigar tobacco in the Connecticut Valley. Farmers near the largest cities intensified their specialization in urban market products, and as the railroads expanded, fluid milk was shipped longer distances to these cities. Farmers in less accessible areas and on poor agricultural land, infertile or too hilly, became less competitive. If these farmers and their children stayed, their incomes declined relative to others in the East, but if they moved to the Midwest or to the burgeoning industrial cities of the East, they had the chance of participating in the rising prosperity.

Metropolitan Industrial Complexes

The metropolises of Boston, New York, Philadelphia, and, to a lesser extent, Baltimore, led the industrial expansion after 1820, because they were the greatest concentrated markets, they had the most capital, and their wholesalers provided access to subregional and regional markets outside the metropolises. By 1840, each of them was surrounded by industrial satellites – manufacturing centers in close proximity to, and economically integrated with, the metropolis. Together, these metropolises and their satellites formed metropolitan industrial complexes, which accounted for almost one-quarter of the nation’s manufacturing (see Table 2). Examples of metropolis-satellite pairs include Boston and Lowell, New York and Paterson (New Jersey), Philadelphia and Reading (Pennsylvania), and Baltimore and Wilmington (Delaware), which also was a satellite of Philadelphia. Among the four leading metropolises, New York and Philadelphia housed, by far, the largest shares of the nation’s manufacturing workers, and their satellites had large numbers of industrial workers. Yet, Boston’s satellites contained the greatest concentration of industrial workers in the nation, with almost seven percent of the national total. The New York, Philadelphia, and Boston metropolitan industrial complexes each had approximately the same share of the nation’s manufacturing workers. These complexes housed a disproportionate share of the nation’s commerce-serving manufactures, such as printing-publishing and paper, and of local, regional, and national market manufactures, such as glass, drugs and paints, textiles, musical instruments, furniture, hardware, and machinery.

Table 2
Manufacturing Employment in the Metropolitan Industrial Complexes
of New York, Philadelphia, Boston, and Baltimore
as a Percentage of National Manufacturing Employment in 1840

Metropolis Satellites Complex
New York 4.1% 3.4% 7.4%
Philadelphia 3.9 2.9 6.7
Boston 0.5 6.6 7.1
Baltimore 2.0 0.2 2.3
Four Complexes 10.5 13.1 23.5

Note: Metropolitan county is defined as the metropolis for each complex and “outside” comprises nearby counties; those included in each complex were the following. New York: metropolis (New York, Kings, Queens, Richmond); outside (Connecticut: Fairfield; New York: Westchester, Putnam, Rockland, Orange; New Jersey: Bergen, Essex, Hudson, Middlesex, Morris, Passaic, Somerset). Philadelphia: metropolis (Philadelphia); outside (Pennsylvania: Bucks, Chester, Delaware, Montgomery; New Jersey: Burlington, Gloucester, Mercer; Delaware: New Castle). Boston: metropolis (Suffolk); outside (Essex, Middlesex, Norfolk, Plymouth). Baltimore: metropolis (Baltimore); outside (Anne Arundel, Harford).

Source: U.S. Bureau of the Census, Compendium of the Sixth Census, 1840 (Washington, D.C.: Blair and Rives, 1841).

Also, by 1840, prosperous agricultural areas farther from these complexes, such as the Connecticut Valley in New England, the Hudson Valley, the Erie Canal Corridor across New York state, and southeastern Pennsylvania, housed significant amounts of manufacturing in urban places. At the intersection of the Hudson and Mohawk rivers, the Albany-Troy agglomeration contained one of the largest concentrations of manufacturing outside the metropolitan complexes. And, industrial towns such as Utica, Syracuse, Rochester, and Buffalo were strung along the Erie Canal Corridor. Many of the manufactures (such as furniture, wagons, and machinery) served subregional markets in the areas of prosperous agriculture, but some places also developed specialization in manufactures (textiles and hardware) for larger regional and interregional market areas (the East as a whole). The Connecticut Valley, for example, housed many firms which produced cotton textiles, hardware, and cutlery.

Manufactures for Eastern and National Markets

Shoes

In several industrial sectors whose firms had expanded to regional, and even multiregional, markets in the East before 1820, firms intensified their penetration of eastern markets and reached markets in the rapidly growing Midwest between 1820 and 1860. In eastern Massachusetts, a production complex of shoe firms innovated methods of organizing output within and among firms, and they developed a wide array of specialized tools and components to increase productivity and to lower manufacturing costs. In addition, a formidable wholesaling, marketing, and distribution complex, headed by Boston wholesalers, pushed the ever-growing volume of shoes into sales channels which reached throughout the nation. Machinery did not come into use until the 1850s, and, by 1860, Massachusetts accounted for half of the value of the nation’s shoe production.

Cotton Textiles

In contrast, machinery constituted an important factor of production which drove down the price of cotton textile goods, substantially enlarging the quantity consumers demanded. Before 1820, most of the machinery innovations improved the spinning process for making yarn, and in the five years following 1815, innovations in mechanized weaving generated an initial substantial drop in the cost of production as the first integrated spinning-weaving mills emerged. During the next decade and a half the price of cotton goods collapsed by over fifty percent as large integrated spinning-weaving mills became the norm for the production of most cotton goods. Therefore, by the mid-1830s vast volumes of cotton goods were pouring out of textile mills, and a sophisticated set of specialized wholesaling firms, mostly concentrated in Boston, and secondarily, in New York and Philadelphia, channeled these items into the national market.

Prior to 1820, the cotton textile industry was organized into three cores. The Providence core dominated and the Boston core occupied second place; both were based mostly on mechanized spinning. A third core in the city of Philadelphia was based on hand spinning and weaving. Within about fifteen years after 1820, the Boston core soared to a commanding position in cotton textile production as a group of Boston merchants and their allies relentlessly replicated their business plan at various sites in New England, including Lowell, Chicopee, and Taunton in Massachusetts; Nashua, Manchester, and Dover in New Hampshire; and Saco in Maine. The Providence core continued to grow, but its investors did not seem to fully grasp the strategic, multi-faceted business plan which the Boston merchants implemented. Investors in an emerging core within about fifty to seventy-five miles of New York City, in the Hudson Valley and northern New Jersey, similarly did not seem to fully understand the Boston merchants’ plan, and these New York City area firms never reached the scale of the firms of the Boston core. The Philadelphia core enlarged to nearby areas southwest of the city and in Delaware, but these firms stayed small, and the Philadelphia firms created a small-scale, flexible production system which turned out specialized goods, not the mass-market commodity textiles of the other cores.

Capital Investment in Cotton Textiles

The distribution of capital investment in cotton textiles across the regions and states of the East between 1820 and 1860 captures the changing prominence of the cores of cotton textile production (see Table 3). In 1820 the New England and Middle Atlantic regions contained approximately similar shares (almost half each) of the nation’s capital investment. During the 1820s, however, the cotton textile industry restructured to a form which was maintained for the next three decades. New England’s share of capital investment surged to about seventy percent, and it maintained that share until 1860, whereas the Middle Atlantic region’s share fell to around twenty percent by 1840 and remained near that level until 1860. The rest of the nation, primarily the South, reached about ten percent of total capital investment around 1840 and continued at that level for the next two decades. Massachusetts became the leading cotton textile state by 1831, and Rhode Island, the early leader, gradually slipped to about ten percent by the 1850s; New Hampshire and Pennsylvania housed shares similar to Rhode Island’s by that time.

Table 3
Capital Invested in Cotton Textiles
by Region and State as a Percentage of the Nation
1820-1860

Region/state 1820 1831 1840 1850 1860
New England 49.6% 69.8% 68.4% 72.3% 70.3%
Maine 1.6 1.9 2.7 4.5 6.1
New Hampshire 5.6 13.1 10.8 14.7 12.8
Vermont 1.0 0.7 0.2 0.3 0.3
Massachusetts 14.3 31.7 34.1 38.2 34.2
Connecticut 11.6 7.0 6.2 5.7 6.7
Rhode Island 15.4 15.4 14.3 9.0 10.2
Middle Atlantic 46.2 29.5 22.7 17.3 19.0
New York 18.8 9.0 9.6 5.6 5.5
New Jersey 4.7 5.0 3.4 2.0 1.3
Pennsylvania 6.3 9.3 6.5 6.1 9.3
Delaware 4.0 0.9 0.6 0.6 0.6
Maryland 12.4 5.3 2.6 3.0 2.3
Rest of nation 4.3 0.7 9.0 10.4 10.7
Nation 100.0% 100.0% 100.0% 100.0% 100.0%
Total capital (thousands) $10,783 $40,613 $51,102 $74,501 $98,585

Sources: David J. Jeremy, Transatlantic Industrial Revolution: The Diffusion of Textile Technologies Between Britain and America, 1790-1830s (Cambridge, MA: MIT Press, 1981), appendix D, table D.1, p. 276; U.S. Bureau of the Census, Compendium of the Sixth Census, 1840 (Washington, D.C.: Blair and Rives, 1841); U.S. Bureau of the Census, Report on the Manufactures of the United States at the Tenth Census, 1880 (Washington, D.C.: Government Printing Office, 1883).

Connecticut’s Industries

In Connecticut, industrialists built on their successful production and sales prior to 1820 and expanded into a wider array of products which they sold in the East and South; after 1840, they acquired more sales in the Midwest. This success was not based on a mythical “Yankee ingenuity,” which, typically, has been framed in terms of character. Instead, this ingenuity rested on fundamental assets: a highly educated population linked through wide-ranging social networks which communicated information about technology, labor opportunities, and markets; and abundant supplies of capital in the state which supported the entrepreneurs. The peddler distribution system provided efficient sales channels into the mid-1830s, but, after that, firms took advantage of more traditional wholesaling channels. In some sectors, such as the brass industry, firms followed the example of the large Boston-core textile firms, and the brass companies founded their own wholesale distribution agencies in Boston and New York City. The achievements of Connecticut’s firms were evident by 1850. As a share of the nation’s value of production, they accounted for virtually all of the clocks, pins, and suspenders, close to half of the buttons and rubber goods, and about one-third of the brass foundry products, Britannia and plated ware, and hardware.

Difficulty of Duplicating Eastern Methods in the Midwest

The East industrialized first, building on its prosperous agriculture, as some of its entrepreneurs shifted into the national market manufactures of shoes, cotton textiles, and the diverse goods turned out in Connecticut. These industrialists made this shift prior to 1820, and they enhanced their dominance of these products during the subsequent two decades. Manufacturers in the Midwest did not have sufficient intraregional markets to begin producing these goods before 1840; therefore, they could not compete in these national market manufactures. Eastern firms had developed technologies and organizations of production and created sales channels which could not be readily duplicated, and their light, high-value goods were transported cheaply to the Midwest. When midwestern industrialists faced choices about which manufactures to enter, the eastern light, high-value goods were being sold in the Midwest at prices so low that it was too risky for midwestern firms to attempt to compete. Instead, these firms moved into a wide range of local and regional market manufactures which also existed in the East, but which cost too much to transport to the Midwest. These goods included lumber and food products (e.g., flour and whiskey), bricks, chemicals, machinery, and wagons.

The American Manufacturing Belt

The Midwest Joins the American Manufacturing Belt after 1860

Between 1840 and 1860, midwestern manufacturers made strides in building an industrial infrastructure, and they were positioned to join with the East to constitute the American Manufacturing Belt, the great concentration of manufacturing which would sprawl from the East Coast to the edge of the Great Plains. The boundaries of this Belt were mostly set within a decade or so after 1860, because technologies and organizations of production and of sales channels had lowered costs across a wide array of manufactures, and improvements in transportation (such as an integrated railroad system) and communication (such as the telegraph) reduced distribution costs. Thus, increasing shares of industrial production were sold in interregional markets.

Lack of Industrialization in the South

Although the South had prosperous farms, it failed to build a deep and broad industrial infrastructure prior to 1860, because much of its economy rested on a slave agricultural system. In this economy, investments were heavily concentrated in slaves rather than in an urban and industrial infrastructure. Local and regional demand remained low across much of the South, because slaves were not able to freely express their consumption demands and population densities remained low, except in a few agricultural areas. Thus, the market thresholds for many manufactures were not met, and, even where thresholds were met, demand was insufficient to support more than a few factories. By the 1870s, when the South had recovered from the Civil War and its economy was reconstructed, eastern and midwestern industrialists had built strong positions in many manufactures. And, as new industries emerged, the northern manufacturers had the technological and organizational infrastructure and distribution channels to capture dominance in the new industries.

In a similar fashion, the Great Plains, the Southwest, and the West were settled too late for their industrialists to be major producers of national market goods. Manufacturers in these regions focused on local and regional market manufactures. Some low-wage industries (such as textiles) began to move to the South in significant numbers after 1900, and the emergence of industries based on high technology after 1950 led to new manufacturing concentrations which rested on different technologies. Nonetheless, the American Manufacturing Belt housed the majority of the nation’s industry until the middle of the twentieth century.

This essay is based on David R. Meyer, The Roots of American Industrialization, Baltimore: Johns Hopkins University Press, 2003.

Additional Readings

Atack, Jeremy, and Fred Bateman. To Their Own Soil: Agriculture in the Antebellum North. Ames, IA: Iowa State University Press, 1987.

Baker, Andrew H., and Holly V. Izard. “New England Farmers and the Marketplace, 1780-1865: A Case Study.” Agricultural History 65 (1991): 29-52.

Barker, Theo, and Dorian Gerhold. The Rise and Rise of Road Transport, 1700-1990. New York: Cambridge University Press, 1995.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. New York: Cambridge University Press, 2000.

Brown, Richard D. Knowledge is Power: The Diffusion of Information in Early America, 1700-1865. New York: Oxford University Press, 1989.

Clark, Christopher. The Roots of Rural Capitalism: Western Massachusetts, 1780-1860. Ithaca, NY: Cornell University Press, 1990.

Dalzell, Robert F., Jr. Enterprising Elite: The Boston Associates and the World They Made. Cambridge, MA: Harvard University Press, 1987.

Durrenberger, Joseph A. Turnpikes: A Study of the Toll Road Movement in the Middle Atlantic States and Maryland. Cos Cob, CT: John E. Edwards, 1968.

Field, Alexander J. “On the Unimportance of Machinery.” Explorations in Economic History 22 (1985): 378-401.

Fishlow, Albert. American Railroads and the Transformation of the Ante-Bellum Economy. Cambridge, MA: Harvard University Press, 1965.

Fishlow, Albert. “Antebellum Interregional Trade Reconsidered.” American Economic Review 54 (1964): 352-64.

Goodrich, Carter, ed. Canals and American Economic Development. New York: Columbia University Press, 1961.

Gross, Robert A. “Culture and Cultivation: Agriculture and Society in Thoreau’s Concord.” Journal of American History 69 (1982): 42-61.

Hoke, Donald R. Ingenious Yankees: The Rise of the American System of Manufactures in the Private Sector. New York: Columbia University Press, 1990.

Hounshell, David A. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Jeremy, David J. Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Jones, Chester L. The Economic History of the Anthracite-Tidewater Canals. University of Pennsylvania Series on Political Economy and Public Law, no. 22. Philadelphia: John C. Winston, 1908.

Karr, Ronald D. “The Transformation of Agriculture in Brookline, 1770-1885.” Historical Journal of Massachusetts 15 (1987): 33-49.

Lindstrom, Diane. Economic Development in the Philadelphia Region, 1810-1850. New York: Columbia University Press, 1978.

McClelland, Peter D. Sowing Modernity: America’s First Agricultural Revolution. Ithaca, NY: Cornell University Press, 1997.

McMurry, Sally. Transforming Rural Life: Dairying Families and Agricultural Change, 1820-1885. Baltimore: Johns Hopkins University Press, 1995.

McNall, Neil A. An Agricultural History of the Genesee Valley, 1790-1860. Philadelphia: University of Pennsylvania Press, 1952.

Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War. New York: Cambridge University Press, 2000.

Mancall, Peter C. Valley of Opportunity: Economic Culture along the Upper Susquehanna, 1700-1800. Ithaca, NY: Cornell University Press, 1991.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Meyer, David R. “The Division of Labor and the Market Areas of Manufacturing Firms.” Sociological Forum 3 (1988): 433-53.

Meyer, David R. “Emergence of the American Manufacturing Belt: An Interpretation.” Journal of Historical Geography 9 (1983): 145-74.

Meyer, David R. “The Industrial Retardation of Southern Cities, 1860-1880.” Explorations in Economic History 25 (1988): 366-86.

Meyer, David R. “Midwestern Industrialization and the American Manufacturing Belt in the Nineteenth Century.” Journal of Economic History 49 (1989): 921-37.

Ransom, Roger L. “Interregional Canals and Economic Specialization in the Antebellum United States.” Explorations in Entrepreneurial History 5, no. 1 (1967-68): 12-35.

Roberts, Christopher. The Middlesex Canal, 1793-1860. Cambridge, MA: Harvard University Press, 1938.

Rothenberg, Winifred B. From Market-Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850. Chicago: University of Chicago Press, 1992.

Scranton, Philip. Proprietary Capitalism: The Textile Manufacture at Philadelphia, 1800-1885. New York: Cambridge University Press, 1983.

Shlakman, Vera. “Economic History of a Factory Town: A Study of Chicopee, Massachusetts.” Smith College Studies in History 20, nos. 1-4 (1934-35): 1-264.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John J. Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48 (1988): 813-50.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization: Evidence from the American Northeast, 1820-1860.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 679-729. Chicago: University of Chicago Press, 1986.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Economic Growth before 1860: Revised Conjectures.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 11-27. Stanford, CA: Stanford University Press, 1994.

Weiss, Thomas. “Long-Term Changes in U.S. Agricultural Output per Worker, 1800-1900.” Economic History Review 46 (1993): 324-41.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Wood, Frederic J. The Turnpikes of New England. Boston: Marshall Jones, 1919.

Wood, Gordon S. The Radicalism of the American Revolution. New York: Alfred A. Knopf, 1992.

Zevin, Robert B. “The Growth of Cotton Textile Production after 1815.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman, 122-47. New York: Harper & Row, 1971.

Citation: Meyer, David. “American Industrialization”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-roots-of-american-industrialization-1790-1860/

Industrial Sickness Funds

John E. Murray, University of Toledo

Overview and Definition

Industrial sickness funds provided an early form of health insurance. They were financial institutions that extended cash payments and in some cases medical benefits to members who became unable to work due to sickness or injury. The term “industrial sickness funds” is a later construct describing funds organized by companies (also known as establishment funds) and by labor unions. These funds were widespread geographically in the United States; the 1890 Census of Insurance found 1,259 nationwide, with concentrations in the Northeast, Midwest, California, Texas, and Louisiana (U.S. Department of the Interior, 1895). By the turn of the twentieth century, some industrial sickness funds had accumulated considerable experience at managing sickness benefits. A few predated the Civil War. When the U.S. Commissioner of Labor surveyed a sample of sickness funds in 1908, the survey found 867 non-fraternal funds nationwide that provided temporary disability benefits (U.S. Commissioner of Labor, 1909). By the time of World War I, these funds, together with similar funds sponsored by fraternal societies, covered 30 to 40 percent of non-agricultural wage workers in the more industrialized states, or by extension, eight to nine million workers nationwide (Murray 2007a). Sickness funds were numerous, widespread, and in general carefully operated.

Industrial sickness funds were among the earliest providers of any type of health or medical benefits in the United States. In fact, their earliest product was called “workingman’s insurance” or “sickness insurance,” terms that described their clientele and purpose accurately. In the late Progressive Era, reformers promoted government insurance programs that would supplant the sickness funds. To sound more British, they used the term “health insurance,” and that is the phrase we still use for this kind of insurance contract (Numbers 1978). In the history of health insurance, the funds were contemporary with the benefit operations of fraternal societies (see fraternal sickness insurance) and led into the period of group health insurance (see health insurance, U.S.). They should be distinguished from the sickness benefits provided by some industrial insurance policies, which required weekly premium payments and paid a cash benefit upon death that was intended to cover burial expenses.

Many written histories of health insurance have missed the important role industrial sickness funds played both in relieving worker suffering and in the political process. Recent historians have tended to criticize, patronize, or ignore sickness funds. Lubove (1986) complained that they stood in the way of government insurance for all workers. Klein (2003) claimed that they were inefficient, without making explicit her standard for that judgment. Quadagno (2005) simply asserted that no one had thought of health insurance before the 1920s. Contemporary commentators such as I. M. Rubinow and Irving Fisher dismissed as “infantile” workers who preferred “hopelessly inadequate” sickness fund insurance over government insurance (Derickson 2005). But these criticisms stemmed more from their authors’ ideological preconceptions than from close study of these institutions.

Rise and Operations of Industrial Sickness Funds

The period of their greatest extent and importance was from the 1880s to around 1940. The many state labor bureau surveys of individual workers, since digitized by the University of California’s Historical Labor Statistics Project and available for download at EH.net, often asked questions such as “do you belong to a benefit society,” meaning a fraternal sickness benefit fund or an industrial sickness fund. Of the surveys from the early 1890s that included this question, around a quarter of respondents indicated that they belonged to such societies. Later, closer to 1920, several states examined the extent of sickness insurance coverage in response to movements to create governmental health insurance for workers (Table 1). These later studies indicated that in the Northeast, Midwest, and California, between thirty and forty percent of non-agricultural workers were covered. Thus, remarkably, these societies had actually increased their market share over a three-decade period in which the labor force itself grew from 13 to 30 million workers (Murray 2007a). Industrial sickness funds were dynamic institutions, capable of dealing with an ever-expanding labor market.

Table 1:
Sources of Insurance in Three States (thousands of workers)

Source/state Illinois Ohio California
Fraternal society 250 200 291
Establishment fund 116 130 50
Union fund 140 85 38
Other sick fund 12 N/a 35
Commercial insurance 140 85 2 (?)
Total 660 500 416
Eligible labor force 1,850 1,500 995
Share insured 36% 33% 42%
Sources: Illinois (1919), Ohio (1919), California (1917), Lee et al. (1957).
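
The “share insured” row is simply the insured total divided by the eligible labor force. As a quick check (our illustration, not part of the original sources), a few lines of Python reproduce the percentages from the table’s own figures:

    # Recompute the "share insured" row of Table 1 from the rows above it.
    # Figures are thousands of workers, copied from the table; rounding to
    # whole percentages matches the table's presentation.
    insured = {"Illinois": 660, "Ohio": 500, "California": 416}
    labor_force = {"Illinois": 1850, "Ohio": 1500, "California": 995}
    for state in insured:
        print(f"{state}: {insured[state] / labor_force[state]:.0%}")
    # Prints: Illinois: 36%, Ohio: 33%, California: 42%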

Industrial sickness funds operated in a relatively simple fashion, but one that enabled them to mitigate the usual information problems that emerge in insurance markets. The process of joining a fund and making a claim typically worked as follows. A newly hired worker in a plant with such a fund explicitly applied to join, often after a probationary period during which fund managers could observe his baseline health and work habits. After admission to the fund, he paid an entrance fee followed by weekly dues. Since the average industrial worker in the 1910s earned about ten dollars a week, the entrance fee of one dollar was a half-day’s pay and the dues of ten cents made the cost to the worker around one percent of his pay packet.

A member who was unable to work contacted his fund, which then sent either a committee of fellow fund members, a physician, or both to check on the member-now-claimant. If they found him as sick as he had said he was, and in their judgment he was unable to work, after a one week waiting period he received around half his weekly pay. The waiting period was intended to let transient, less serious illnesses resolve so that the fund could support members with longer-term medical problems. To continue receiving the sick pay the claimant needed to allow periodic examinations by a physician or visiting committee. In rough terms, the average worker missed two percent of a work year, or about a week every year, a rate that varied by age and industry. The quarter of all workers who missed any work lost on average one month’s pay; thus a typical incapacitated worker received three and a half weeks of benefit per year. Comparing the cost of dues and expected value of benefits shows that the sickness funds were close to an actuarially fair bet: $5.00 in annual dues compared to (0.25 chance of falling ill) x (3.5 weeks of benefits) x ($5.00 weekly benefit), or about four and a half dollars in expected benefits. Thus, sickness funds appear to have been a reasonably fair deal for workers.
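
To make the dues-versus-benefits comparison concrete, here is a minimal sketch of the same arithmetic in Python, using only the round figures quoted above; the variable names and the 52-week dues year are our assumptions, not archival data:

    # Expected value of establishment fund membership, 1910s round numbers.
    WEEKLY_WAGE = 10.00     # average industrial weekly wage
    WEEKLY_DUES = 0.10      # ten cents per week
    WEEKLY_BENEFIT = 5.00   # sick pay of roughly half wages
    P_CLAIM = 0.25          # a quarter of workers missed some work each year
    BENEFIT_WEEKS = 3.5     # average weeks of benefit for those who claimed

    annual_dues = 52 * WEEKLY_DUES  # $5.20; the text rounds to $5.00
    expected_benefit = P_CLAIM * BENEFIT_WEEKS * WEEKLY_BENEFIT  # $4.375
    print(f"annual dues: ${annual_dues:.2f}")            # annual dues: $5.20
    print(f"expected benefit: ${expected_benefit:.2f}")  # expected benefit: $4.38
    print(f"dues as share of pay: {WEEKLY_DUES / WEEKLY_WAGE:.0%}")  # 1%

The near-equality of annual dues and expected benefits is what the text means by “close to an actuarially fair bet.”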

Establishment funds did not invent sickness benefits by any means. Rather, they systematized previous arrangements for supporting sick workers or the survivors of deceased workers. The old way was to pass the hat, a practice characterized by random assessments and arbitrary financial awards. Workers and employers both observed that contributors and beneficiaries alike detested passing the hat. Fellow workers complained about the surprise nature of the hat’s appearance, and beneficiaries suffered humiliation on top of grief when the hat contained less money than had been collected for a more popular co-worker. Eventually rules replaced discretion, and benefits were paid according to a published schedule, either as a flat rate per diem or as a percentage of wages. The 1890 Census of Insurance reported that only a few funds extended benefits “at the discretion of the society,” and by the time of the 1908 Commissioner of Labor survey the practice had disappeared (Murray 2007a).

Labor union funds began in the early nineteenth century. In the earliest union funds, members of craft unions pledged to complete jobs that ill brothers had contracted to perform but could not finish. Eventually cash benefit payments replaced these in-kind promises of labor, accompanied by cash premium payments into the union’s kitty. While criticized by many observers as unstable, labor union funds actually operated in transparent fashion. Even funds that offered unemployment benefits survived the depression of the mid-1890s by reducing benefit payments and enacting other conservative measures. Another criticism was that their benefits were too small in amount and too brief in duration, but according to the 1908 Commissioner of Labor survey, labor union funds and establishment funds offered similar levels of benefits. The cost-benefit ratio did favor establishment funds, but establishment fund membership ended with employment at a particular company, while union funds offered the substantial attraction of benefits that were portable from job to job.

The cash payment to sick workers created an incentive to take sick leave that workers without sickness insurance did not face; this is the moral hazard of sick pay. Further, workers who believed that they were more likely to make a sick claim had a stronger incentive to join a sickness fund than workers in relatively good health; this is called adverse selection. Early twentieth century commentators on government sickness insurance disagreed on the extent and even the existence of moral hazard and adverse selection in sickness insurance. Later statistical studies found evidence for both in establishment funds. However, the funds themselves had understood the potential financial damage each could wreak and strategized to mitigate such losses. The magnitude of the sick pay moral hazard was small, and it affected primarily the tendency of the worker to make a claim in the first place. Many sickness funds limited their liability here by paying the physician who examined the claimant and who was responsible for approving extended sickness payments. Physicians appear to have paid attention to the wishes of those who paid them. Among claimants in funds that paid the examining physician directly, illnesses ended significantly earlier on average. By the same token, physicians who were paid by the worker tended to approve longer absences for that worker, a sign that physicians too responded to incentives.

Testing for adverse selection depends on whether membership in a company’s fund was the worker’s choice (that is, voluntary) or the company’s choice (that is, compulsory). In fact, among establishment funds in which membership was voluntary, claim rates per member were significantly higher than in mandatory membership funds. This indicates that voluntary funds were especially attractive to sicker workers, which is the essence of adverse selection. To reduce the risks of adverse selection, funds imposed age limits to keep out older applicants, physical examinations to discourage the obviously ill, probationary periods to reveal chronic illness, and pre-existing condition clauses to avoid paying for such conditions (Murray 2007a). Sickness funds thus cleverly managed information problems typical of insurance markets.

Industrial Sickness Funds and Progressive Era Politics

Industrial sickness funds were the linchpin of efforts to promote and to oppose the Progressive campaign for state-level mandatory government sickness insurance. One consistent claim made by government insurance supporters was that workers could neither afford to pay for sickness insurance nor save in advance of financially damaging health problems. The leading advocacy organization, the American Association for Labor Legislation (AALL), reported in its magazine that “Savings of Wage-Earners Are Insufficient to Meet this Loss,” meaning lost income during sickness (American Association for Labor Legislation 1916a). However, worker surveys of savings, income, and insurance holdings revealed that workers rationally strategized according to their varying needs and abilities across the life cycle. Young workers saved little and were less likely to belong to industrial sickness funds, but they were also less likely to miss work due to illness. Middle-aged workers, married with families to support, were relatively more likely to belong to a sickness fund. Older workers pursued a different strategy, saving more and relying on sickness funds less; among other factors, they wanted greater liquidity in their financial assets (Murray 2007a). Worker strategies reflected varying needs at varying stages of life, some (but not all) of which could be adequately addressed by membership in sickness funds.

Despite claims to the contrary by some historians, there was little popular support for government sickness insurance in early twentieth century America. Lobbying by the AALL led twelve states to charge investigatory commissions with determining the need for and feasibility of government sickness insurance (Moss 1996). The AALL offered a basic bill that could be adjusted to meet a state’s particular needs (American Association for Labor Legislation 1916b). Typically the Association prodded states to adopt a version of German insurance, which would keep the many small industrial sickness funds while forcing new members into some and creating new funds for other workers. However, these bills met consistent defeat in statehouses, earning only a fleeting victory in the New York Senate in 1919, which was followed by the bill’s death in an Assembly committee (Hoffman 2001). In the previous year a California referendum on a constitutional amendment that would allow the government to provide sickness insurance lost by nearly three to one (Costa 1996).

After the Progressive campaign exhausted itself, industrial sickness funds continued to grow through the 1920s, but the Great Depression exposed deep flaws in their structure. Many labor union funds, without a sponsoring firm to act as lender of last resort, dissolved. Establishment funds failed at a surprisingly low rate, but their survival was made possible by the tendency of firms to fire less healthy workers. Federal surveys in Minnesota found that ill health led to earlier job loss in the Depression, and comparisons of self-reported health in later surveys indicated that the unemployed were in fact in poorer health than the employed, a disparity that grew as the Depression deepened. Thus, industrial sickness funds paradoxically enjoyed falling claim rates (and thus reduced expenses) as the economy deteriorated (Murray 2007a).

Decline and Rebirth of Sickness Funds

At the same time, commercial insurers had been engaging in ever more productive research into the actuarial science of group health insurance. Eventually the insurers cut premium rates while offering benefits comparable to those available through sickness funds. As a result, the commercial insurers and Blue Cross/Blue Shield came to dominate the market for health benefits. A federal survey covering the early 1930s found more firms with group health plans than with mutual benefit societies, but the benefit societies still insured more than twice as many workers (Sayers et al. 1937). By the later 1930s that gap in the number of firms had widened in favor of group health (Figure 1), and the numbers of workers insured were about equal. After the mid-1940s, industrial sickness funds were no longer a significant player in markets for health insurance (Murray 2007a).

Figure 1: Health Benefit Provision and Source
Source: Dobbin (1992) citing National Industrial Conference Board surveys.

More recently, a type of industrial sickness fund has begun to stage a comeback. Voluntary employee beneficiary associations (VEBAs) fall under a 1928 federal law that was created to govern industrial sickness funds. VEBAs are trusts set up to pay employee benefits without earning profits for the company. In late 2007, the Big Three automakers each contracted with the United Auto Workers (UAW) to operate a VEBA that would provide health insurance for UAW members. If the automakers and their workers succeed in establishing VEBAs that stand the test of time, they will have resurrected a once-successful financial institution previously thought relegated to the pre-World War II economy (Murray 2007b).

References

American Association for Labor Legislation. “Brief for Health Insurance.” American Labor Legislation Review 6 (1916a): 155–236.

American Association for Labor Legislation. “Tentative Draft of an Act.” American Labor Legislation Review 6 (1916b): 239–68.

California Social Insurance Commission. Report of the Social Insurance Commission of the State of California, January 25, 1917. Sacramento: California State Printing Office, 1917.

Costa, Dora L. “Demand for Private and State Provided Health Insurance in the 1910s: Evidence from California.” Photocopy, MIT, 1996.

Derickson, Alan. Health Security for All: Dreams of Universal Health Care in America. Baltimore: Johns Hopkins University Press, 2005.

Dobbin, Frank. “The Origins of Private Social Insurance: Public Policy and Fringe Benefits in America, 1920-1950.” American Journal of Sociology 97 (1992): 1416-50.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Klein, Jennifer. For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State. Princeton: Princeton University Press, 2003.

Lee, Everett S., Ann Ratner Miller, Carol P. Brainerd, and Richard A. Easterlin, under the direction of Simon Kuznets and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, 1870-1950: Volume I, Methodological Considerations and Reference Tables. Philadelphia: Memoirs of the American Philosophical Society 45, 1957.

Lubove, Roy. The Struggle for Social Security, 1900-1930. Second edition. Pittsburgh: University of Pittsburgh Press, 1986.

Moss, David. Socializing Security: Progressive-Era Economists and the Origins of American Social Policy. Cambridge: Harvard University Press, 1996.

Murray, John E. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven: Yale University Press, 2007a.

Murray, John E. “UAW Members Must Treat Health Care Money as Their Own,” Detroit Free Press, 21 November 2007b.

Ohio Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions: Report, Recommendations, Dissenting Opinions. Columbus: Heer, 1919.

Quadagno, Jill. One Nation, Uninsured: Why the U. S. Has No National Health Insurance. New York: Oxford University Press, 2005.

Sayers, R. R., Gertrud Kroeger, and W. M. Gafafer. “General Aspects and Functions of the Sick Benefit Organization.” Public Health Reports 52 (November 5, 1937): 1563–80.

State of Illinois. Report of the Health Insurance Commission of the State of Illinois, May 1, 1919. Springfield: State of Illinois, 1919.

U.S. Department of the Interior. Report on Insurance Business in the United States at the Eleventh Census: 1890; pt. 2, “Life Insurance.” Washington, DC: GPO, 1895.

U.S. Commissioner of Labor. Twenty-third Annual Report of the Commissioner of Labor, 1908: Workmen’s Insurance and Benefit Funds in the United States. Washington, DC: GPO, 1909.

Citation: Murray, John. “Industrial Sickness Funds, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/industrial-sickness-funds/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, highly performing, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom, and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. Indonesia’s population is about 230 million (2002), of which the largest share (roughly 60 percent) lives in Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Indonesia Philippines Thailand Japan
1900 745 1,033 812 1,180
1913 904 1,066 835 1,385
1950 840 1,070 817 1,926
1973 1,504 1,959 1,874 11,439
1990 2,516 2,199 4,645 18,789
2000 3,041 2,385 6,335 20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that prohibited further expansion. These setbacks often originated in the internal institutional or political spheres (either after independence or in colonial times), although external influences, such as the 1930s Depression, also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself, but it can also be applied to the heterogeneity of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is between densely populated Java, which has a long tradition of politically and economically dominating the sparsely populated Outer Islands, and those Outer Islands themselves. But within Java and within the various Outer Islands as well, one encounters a rich cultural diversity, and economic differences between the islands persist. Nevertheless, for centuries the flourishing and enterprising interregional trade has benefited regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also incorporated weaknesses in the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945, when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia analyzes a range of topics, varying from the characteristics of the dynamic exports of raw materials, to the dualist economy in which both Western and Indonesian entrepreneurs participated, to the strong measure of regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (among them many Indonesians, as well as Australian and American scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a newer textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook juxtaposes three themes: globalization, economic integration, and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch; the period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name for all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth to eighteenth centuries

Present-day research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups, such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a weak grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by western observers with their limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (whose yields were not necessarily meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was concentrated in only a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade: for example, cotton from Bengal was sold in the pepper-growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders, but corruption, a lack of investment capital, and increasing competition from England led to its demise; the VOC was dissolved in 1799 (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was implemented (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by Javanese prince Diponegoro. To repress this revolt and establish firm rule in Java, colonial expenses increased, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) accruing to the Dutch state in the period 1830-1870 were considerable, several reasons can be given for the change to a liberal system: (a) the emergence of a new liberal political ideology; (b) the gradual erosion of the Cultivation System during the 1840s and 1850s, as internal reforms became necessary; and (c) the growth of private (European) entrepreneurship with the know-how and interest to exploit natural resources, which removed the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

                     1840-1844   1845-1849
Coffee                  40,278      24,549
Sugar                    8,218       4,136
Indigo                   7,836       7,726
Pepper, Tea                647       1,725
Total net profits       39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                              1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products     227.0     473.9     652.7     641.8
Costs of transport etc. (NHM)                    88.0     165.4     138.7     114.7
Sum of expenses                                  59.2     175.1     275.3     276.6
Total net profits*                              150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted, but the export of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share of these exports, which were accompanied by an intensifying internal trade within the archipelago and generated a growing flow of foreign imports. Export crops were cultivated both on large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration as the road system, the railroad system (in Java and Sumatra) and the port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (such as forest products), supplied import goods, and transported civil servants and soldiers.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. For some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had begun in 1870. Various import restrictions were launched, making the economy more self-sufficient (for example in the production of rice) and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery came relatively late. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Average Annual Growth in Key Economic Aggregates, 1830-1990 (percent per year)

Period                          GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840    n.a.             13.5             5.0             8.5
Cultivation System 1840-1848    n.a.              1.5            -4.5            [very low]
Cultivation System 1849-1873    n.a.              1.5             1.5             2.6
Liberal Period 1874-1900        [very low]        3.1            -1.9             2.3
Ethical Period 1901-1928         1.7              5.8            17.4             4.1
Great Depression 1929-1934      -3.4             -3.9           -19.7             0.4
Prewar Recovery 1934-1940        2.5              2.2             7.8             3.4
Old Order 1950-1965              1.0              0.8            -2.1             1.8
New Order 1966-1990              4.4              5.4            11.6            10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
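A brief sketch of the method mentioned in the note may help (the notation here is illustrative, not Booth’s own). Fitting an exponential curve $y_t = y_0 e^{gt}$ to a series is equivalent to the log-linear regression

$\ln y_t = \alpha + g\,t + \varepsilon_t,$

and the estimated slope $\hat{g}$, multiplied by 100, is the average annual growth rate reported in the table. Estimating $g$ from all years, rather than from the endpoints alone, makes the figure less sensitive to unusually good or bad first and last years.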

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. Between 1949 and 1965 there was little economic growth, and what growth there was occurred mainly in the years 1950-1957. In 1958-1965 growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and an absence of foreign capital were detrimental to economic development after the government eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution, and he estranged the suppliers of western capital even further when he developed communist sympathies.

After 1966, the second president, General Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, an expansion which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output increased quickly, including steel, aluminum, and cement, but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the cost of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of a technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank report of 1993 speaks of an ‘East Asian Miracle’, emphasizing macroeconomic stability and investments in human capital (World Bank 1993: vi).

The financial crisis of 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. Burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices characterized the final years of the 32-year-old, strongly centralized, autocratic Soeharto regime.

From 1998 until present

Today, the Indonesian economy still suffers from severe development problems following the financial crisis of 1997 and the political reforms that followed Soeharto’s resignation in 1998. Secessionist movements, poor security in the provincial regions, and unstable policies are among its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. Investor confidence remains low, and internal reform will be essential to build up the confidence of international donors and investors and to achieve future growth.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but as of the summer of 2003, when this was written, it had not yet fully materialized.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes which have been the subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, many regions of Indonesia had a traditionally strong and dynamic class of entrepreneurs (traders and peasants). Resilient in times of economic malaise and adept in symbiosis with traders of other Asian nationalities (particularly the Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western sector and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to imply western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy, one less judgmental about the characteristics of economic development in the Asian sector. Some scholars focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of and the motives for Dutch colonial expansion. Dutch imperialism can be viewed as a rather complex mix of political, economic and military motives, which influenced decisions about colonial borders, the establishment of political control in order to exploit oil and other natural resources, and the prevention of local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion ran from 1825 to 1870: interference in economic matters outside Java increased slowly, but military intervention was only occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896: initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by an extension of colonial (military) control in the regions concerned. The third and final phase, characterized by full-scale aggressive imperialism (often known as ‘pacification’), lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which had long been dominant economically and politically, and the Outer Islands, a vast and sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas which were propelled forward by export trade, whether of Indonesian or European origin (examples are Palembang, East Sumatra, Southeast Kalimantan), and areas which lagged behind and only slowly reaped the fruits of the modernization that took place elsewhere (for example Benkulu, Timor, Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, official Dutch policy was to abstain from interference in local affairs, and the scarce resources of the Dutch colonial administration were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. The result was the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), installed in 1918, which had only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The shortage was met by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included a penal clause allowing plantation owners to punish workers. In response to reported abuses, the colonial government established the Labor Inspectorate (1908), which aimed at preventing the mistreatment of coolies on the estates. The living circumstances and treatment of the coolies have been a subject of debate, particularly the question of whether the government put enough effort into protecting the workers’ interests or allowed abuses to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? Both the detrimental effects of this drain of capital (received in return for European entrepreneurial initiative) and the exact methods of measuring it have been debated. There was also a second drain, to the home countries of other immigrant ethnic groups, mainly China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period (1945-1965) was characterized by economic (and political) chaos, although some economic growth undeniably took place during these years. Macroeconomic instability, a lack of foreign investment and structural rigidity were economic problems closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the newly sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid were attracted, unbridled population growth was curbed by family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil prices), and rapid export-led growth. During this last phase, commentators (including academic economists) grew increasingly concerned about corruption thriving at all levels of the government bureaucracy: the KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt and the weak financial system. The severity of the crisis also had political causes: the monetary crisis (KRISMON) led to a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government had become heavily centralized and corrupt and was unable to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2002: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th. Lindblad of Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900-1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way Forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca, NY: Cornell University Press, 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year   Weeks Report   Aldrich Report
1830       69.1
1840       67.1             68.4
1850       65.5             69.0
1860       62.0             66.0
1870       61.1             63.0
1880       60.7             61.8
1890                        60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year   Census of Manufacturing   Jones Manufacturing   Owen Nonstudent Males   Greis Manufacturing   Greis All Workers   Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which information is available. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coalminers’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year    Manufacturing   Construction   Railroads   Bituminous Coal   Anthracite Coal
1850s   about 66                       about 66
1870s   about 62                       about 60
1890    60.0                           51.3
1900    59.6            50.3           52.3            42.8              35.8
1910    57.3            45.2           51.5            38.9              43.3
1920    51.2            43.8           46.8            39.3              43.2
1930    50.6            42.9                           33.3              37.0
1940    37.6            42.5                           27.8              27.2
1955    38.5            37.1                           32.4              31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995 (hours per day)

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
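The “less than one-fourth” projection in the preceding paragraph can be checked directly against Table 6. For 2040, dividing lifetime work hours and lifetime leisure hours by lifetime discretionary hours gives

$\frac{75{,}900}{321{,}900} \approx 0.236 < \tfrac{1}{4} \qquad \text{and} \qquad \frac{246{,}000}{321{,}900} \approx 0.764 > \tfrac{3}{4}.$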

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than those in Denmark and less than those in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a. n.a. n.a. n.a.
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had much impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work (the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace) had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours, and by the late 1860s efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the AFL adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first, and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period — 1916’s Adamson Act, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had it by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford’s employees accounted for more than half of the nation’s approximately 400,000 workers on five-day weeks. However, Ford’s motives were questioned by many employers, who argued that the productivity gains from reducing hours disappeared once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy, and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act of 1938, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level, the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers did not desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. With the end of the war in 1946, 87 percent of women and 71 percent of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to eight-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an eight-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few replacements will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.
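These opposing pressures can be put into a stylized calculation. The sketch below is purely illustrative (the fixed cost, wage premium, and fatigue parameters are hypothetical, not estimates from the literature), but it shows how a cost-minimizing workweek emerges between the pull of fixed costs and the drag of fatigue:

```python
# Hypothetical illustration of the employer's hours tradeoff described above:
# per-worker fixed costs favor long workweeks, while fatigue and the wage
# premium demanded for long hours favor short ones. All parameters invented.
FIXED_COST = 60.0   # weekly fixed cost per worker (e.g., benefits), hypothetical
BASE_WAGE = 1.0     # hourly wage at a standard workweek, hypothetical

def cost_per_effective_hour(hours):
    # Assume effectiveness erodes one percent per hour past a 40-hour week.
    effectiveness = 1.0 if hours <= 40 else 1.0 - 0.01 * (hours - 40)
    # Assume a 0.5 percent wage premium per hour past 40 to retain workers.
    wage = BASE_WAGE * (1 + 0.005 * max(0, hours - 40))
    return (FIXED_COST + wage * hours) / (hours * effectiveness)

for h in (35, 40, 50, 60, 70):
    print(h, round(cost_per_effective_hour(h), 3))
```

With these made-up numbers the cost per effective hour bottoms out around 40 to 50 hours; raising the fixed cost shifts the minimum toward longer weeks, the mechanism the Kellogg’s discussion above attributes to the postwar rise of quasi-fixed costs such as health insurance.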

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
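To make the size of that effect concrete, here is a back-of-the-envelope reading of the estimate (the 10 percent wage gap and 50-hour base week are hypothetical):

```python
# Interpreting the estimated wage-hours elasticity of roughly -0.13 to -0.05
# (from the text): percent change in hours = elasticity x percent change in wages.
wage_gap_pct = 10   # hypothetical 10 percent wage difference across cities
base_week = 50      # hypothetical 50-hour workweek

for elasticity in (-0.13, -0.05):
    hours_change_pct = elasticity * wage_gap_pct
    minutes = base_week * hours_change_pct / 100 * 60
    print(f"elasticity {elasticity}: week shorter by {abs(minutes):.0f} minutes")
```

A city with 10 percent higher wages would thus have a workweek only about 15 to 40 minutes shorter, consistent with the text’s description of a small but systematic willingness to “buy” leisure.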

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen, “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocks of World War I had been weathered, and it was felt that the United States was entering a “new era.” In January 1920, the Federal Reserve seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period, equal to 47.3 percent and 42.4 percent, are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).
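For readers who want to verify the arithmetic, the growth figures quoted above follow from standard index calculations (small discrepancies, such as 40.7 versus the 40.6 percent in the text, reflect rounding of the published index values):

```python
# Growth arithmetic behind the figures in the text.
ip_jan_1920, ip_jul_1929 = 81, 114   # industrial production index, 1935-39 = 100

cumulative = (ip_jul_1929 / ip_jan_1920 - 1) * 100
print(f"Cumulative growth, Jan 1920 to Jul 1929: {cumulative:.1f}%")  # ~40.7

# An average annual rate g over n years solves (1 + g)**n = end/start.
# (The article's 4.6 percent figure applies this to Balke-Gordon real GNP,
# not to this index.)
n = 9.5  # years from January 1920 to July 1929
g = (ip_jul_1929 / ip_jan_1920) ** (1 / n) - 1
print(f"Implied average annual rate for this index: {g * 100:.1f}%")  # ~3.7
```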

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was interrupted only briefly by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions were from May 1923 through July 1924 and October 1926 through November 1927. Both of these recessions were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Although the downturn was long thought to be severe, Romer (1988) has argued that the so-called “postwar depression” was not as severe as once thought. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 being primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so that the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

The giving/taking of credit to/by the Federal Reserve has particular value pertaining to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States’ exports remained strong many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (explained below pertaining to the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Shleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Federal Reserve Bank of New York President Benjamin Strong and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board ensured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure on stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.
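The annualized figures convert these short-period changes to a twelve-month rate. The article does not state its exact convention; a common one compounds the observed ratio over twelve months, as in the sketch below, which therefore yields somewhat different numbers from the 14.7 and 41 percent quoted:

```python
# Annualizing a short-period change by compounding it over twelve months.
def annualized_decline_pct(start, end, months_elapsed):
    return (1 - (end / start) ** (12 / months_elapsed)) * 100

# Industrial production index values (1935-39 = 100) from the text.
print(round(annualized_decline_pct(114, 110, 2), 1))  # Aug-Oct 1929: ~19.3
print(round(annualized_decline_pct(110, 100, 2), 1))  # Oct-Dec 1929: ~43.5
```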

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base multiplied by the money multiplier) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). Beginning in October 1930 and lasting until December 1930, the first of a series of banking panics now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics being spread to sound financial institutions and not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that the private banking system had used before 1913 to provide liquidity – to convert assets into cash – and thereby fight bank runs were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending by coalitions of private banks, known as clearinghouses, and the suspension of deposit convertibility into currency. While these countermeasures by private banks did not prevent bank runs or the financial panic, they lessened the economic impact to a significant extent, and the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions was one of the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Believing the Federal Reserve to be the “lender of last resort,” banks apparently thought that the responsibility to fight bank runs was the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May, the failure of Austria’s largest bank, the Credit-Anstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent, the consumer price index fell 9.4 percent, the nominal supply of M1 dipped 5.7 percent, the ex post real interest rate remained at 11.3 percent, and although the supply of high-powered money actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began their upward ascent, and thus the money multiplier started its downward plunge (Hamilton, 1987). If the economy had flattened out in the spring of 1931, then by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
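The multiplier mechanics invoked here can be written out explicitly. In the textbook formulation (stated here as an assumption, since the article does not write the formula out), with $B$ the monetary base (high-powered money), $c$ the currency–deposit ratio, and $r$ the reserve–deposit ratio,

$$M1 = m \times B, \qquad m = \frac{1 + c}{c + r},$$

so rises in $c$ (depositors withdrawing cash) and in $r$ (banks hoarding reserves) both shrink $m$. M1 can therefore contract even while $B$ expands, which is precisely the combination reported above for 1931.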

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in late summer/early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets and the collapse of the economy arrived with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, the nominal M2 money supply fell 34.7 percent, and although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios accelerated their upward ascent. Thus the money multiplier continued on a downward plunge that was not arrested until March 1933. Similar behaviors for real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

An examination of the macroeconomic data in August 1929 compared to March 1933 provides a stark contrast. The unemployment rate of 3 percent in August 1929 was at 25 percent in March 1933. The industrial production index of 114 in August 1929 was at 54 in March 1933, or a 52.6 percent decrease. The money supply had fallen 35 percent, prices plummeted by about 33 percent, and more than one-third of banks in the United States were either closed or taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time firmly grounded their explanations on the two pillars of classical macroeconomic thought, Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a question of time before wages and prices adjusted fully enough for the economy to return to full employment and achieve the realization of the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy for the situation was to let the Depression run its course so that the economy could be purified from the negative effects of the false expansion. Government intervention was viewed by the Austrian school as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would ordinarily be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals basically believed that economic agents should be forced to re-arrange their spending proclivities and alter their alleged profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s. Thus, the Depression was the price that was being paid for the misdeeds of the previous decade. This is perhaps best exemplified in the well-known quotation of Treasury Secretary Andrew Mellon, who advised President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on. He continued to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune from such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?” complained of the economy being replete with profligate spending including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper which appeared in the November 1930 Quarterly Journal of Economics, demonstrates that some academic economists also held similar liquidationist views.

Although certainly not universal, the descriptions above suggest that no small part of the conventional wisdom at the time believed the Depression to be a penitence for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices adjusted sufficiently. Say’s Law will ensure the economy will return to health, and supply will create its own demand sufficient to return to prosperity, if we simply let the system work its way through. In his memoirs published in 1952, 20 years after his election defeat, Herbert Hoover continued to steadfastly maintain that if Roosevelt and the New Dealers would have stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months after the election of 1932. We have to intensify our resolve to “stay the course.” All will be well in time if we just “take our medicine.” In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not include a section covering the nonmonetary/nonfinancial theories of the Great Depression. These theories, including Temin’s (1976) focus on autonomous consumption decline, the collapse of housing construction contained in Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930, are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical results of the economic literature indicates that they do not have the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, the decline resulted because high consumer indebtedness threatened future consumption spending: default was expensive. Olney shows that households were shouldering an unprecedented burden of installment debt – especially for automobiles. In addition, down payments were large and contracts were short. Missed installment payments triggered repossession, reducing consumer wealth in 1930 because households lost all acquired equity. Cutting consumption was the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In reviewing the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, simply progressed slightly upwards on a gently positive sloping time path. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or suspension of the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have been as they occurred. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) also reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.
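The flavor of such counterfactual arithmetic can be conveyed with a toy calculation. The sketch below is emphatically not Fackler and Parker’s model; it ignores the feedback from money to output that their simulations capture, and the starting level is just an index. It simply shows how far a 3.3 percent M1 trend diverges from the roughly 35 percent cumulative decline reported earlier:

```python
# Toy counterfactual: M1 on its pre-October 1929 trend versus the roughly
# 35 percent actual decline by 1933 reported in the text. Illustrative only;
# this is not a reconstruction of Fackler and Parker's (1994) simulations.
m1_1929 = 100.0   # index the 1929 money supply at 100
trend = 0.033     # 3.3 percent annual growth, from the text

for year in range(1929, 1934):
    print(year, round(m1_1929 * (1 + trend) ** (year - 1929), 1))

actual_1933 = m1_1929 * (1 - 0.35)
gap = (m1_1929 * (1 + trend) ** 4 - actual_1933) / actual_1933 * 100
print(f"By 1933 trend M1 exceeds actual M1 by about {gap:.0f}%")  # ~75%
```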

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York President Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view. Friedman and Schwartz believe that Strong would not have permitted the financial panics and liquidity crises to persist and affect the real economy. Instead, after Governor Strong died, the conduct of open market operations passed from a five-man committee dominated by the New York Federal Reserve to a 12-man committee of Federal Reserve Bank governors. Decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming the policies of the Federal Reserve in the 1930s were not inconsistent with the policies pursued in the decade of the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. That is, while nominal rates were falling, the Federal Reserve did virtually nothing, since it construed this to be a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and there was in fact a “tight” credit market. Failure to make this distinction led money to be a contributing factor to the initial decline of 1929.

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that the deflation during the Depression was anticipated at short horizons, once it was under way. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism that led from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that if the lower bound of the nominal interest rate is reached, then continued deflation renders the opportunity cost of holding money negative. In this instance the nature of money changes. Now the rate of deflation places a floor on the real return nonmoney assets must provide to make them attractive to hold. If they cannot exceed the rate on money holdings, then agents will move their assets into cash and the result will be negative net investment and a decapitalization of the economy.
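Written out, the Fisher relation underlying this argument is, with $i$ the nominal interest rate, $\pi^{e}$ expected inflation, and $r^{e}$ the ex ante real rate,

$$r^{e} = i - \pi^{e},$$

so expected deflation ($\pi^{e} < 0$) raises $r^{e}$ even as $i$ falls. And once $i$ reaches its lower bound of zero, holding cash itself yields an expected real return of $-\pi^{e} > 0$, the floor that nonmoney assets must beat, as described above.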

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression, and that expecting the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply is hindsight: such a course of action was beyond the scope of the policy discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components that were beyond the Federal Reserve’s ability to control. Namely, the money supply may have been falling as a result of declining economic activity, or so-called “reverse causation.” Moreover, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required the Federal Reserve to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply change only prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are often unable to obtain funds at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics that occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 were substantial impediments to the performance of these services and thus impaired the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the validity of the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” as the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this change to the bank panics that occurred, Tobin points out that it also reflected the emergence of pessimistic expectations. Had the public believed that the early stages of the Depression were symptomatic of a recession no different in kind from earlier episodes in our economic history, and that recovery was a real possibility, it need not have turned pessimistic; people might instead have anticipated that things would get better. However, after Britain left the gold standard in September 1931, expectations changed in a very pessimistic way. The public may well have come to believe that the downturn was not going to be reversed, but rather was going to get worse. When households and business investors begin to make plans based on the economy getting worse instead of on anticipations of recovery, the depressing effects of this switch in expectations on consumption and investment are common knowledge in the modern macroeconomic literature. For the literature on the Great Depression, the empirical research conducted on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic or optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis for 1931–33. Yet the continued shocks the United States economy received demonstrated that the business cycle downturn of 1931–33 was of a different kind than had previously been known. Once the public believed this to be so and made its plans accordingly, the results had to be economically devastating. There is no formal empirical confirmation, and I have not segregated the expectations hypothesis as a separate hypothesis in this overview. However, the logic of the above argument compels me to the opinion that the expectations hypothesis provides an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The structure and operation of the interwar gold standard provide a convincing explanation of the international transmission of deflation and depression in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard functioned before World War I was not the same world in which the gold standard was being re-established. A credible commitment to the gold standard, as Hamilton (1988) explains, required that a country maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before the gold standard was re-established. However, many governments during the interwar period returned to the gold standard under the opposite circumstances. They re-established ties to the gold standard precisely because, amid the political chaos generated after World War I, they lacked fiscal soundness and political objectives conducive to reforming monetary policy so that it could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership Britain had shown before. The upshot is that the post-World War I environment was unsuitable for re-establishing the gold standard, and the interwar gold standard was destined to drift in a state of malperformance as no one took responsibility for its proper functioning. However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining and gold-losing countries, which imparted a deflationary bias that dragged the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before monetary contraction, and thus deflation, became a necessity. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows: gold reserves could be accumulated without any binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves embarked on a contractionary path, the result was the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
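
The 2.5 figure is simple reserve arithmetic. With a 40 percent gold backing requirement, the money supply M can be at most the gold stock G divided by 0.40, so a loss of gold forces a contraction of

    \Delta M = \Delta G / 0.40 = 2.5 \times \Delta G

An outflow of $100 million in gold, for example, required the domestic money supply to shrink by $250 million.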

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a riskier proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries came to prefer gold holdings over foreign exchange reserves for whatever reason, the result was a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect is equivalent to the effect on the domestic money supply, in a fractional reserve banking system, of a shift in the public’s money holdings toward currency and away from bank deposits.
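
A stylized example, with invented numbers, shows why such a shift was contractionary. Suppose world reserves consist of 60 in gold held directly plus 40 in sterling claims, and the sterling claims are themselves backed 40 percent by gold in London:

    reserves = 60 (gold) + 40 (sterling claims) = 100
    gold behind the sterling claims = 0.40 x 40 = 16
    total monetary gold = 60 + 16 = 76

So long as the sterling claims are willingly held, 76 of gold supports 100 of reserves. If holders convert the 40 of sterling into gold, the claims are extinguished and 100 of reserves must now be backed by gold alone – gold that does not exist – so reserves, and with them money supplies, must contract.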

The Bank of France and Open Market Operations

Third, the open market powers of many European central banks were restricted or withheld outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e. the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the money supply and inflate so as not to continue to attract gold reserves and impose deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it did so at a parity rate that is believed to have undervalued the franc. When Britain returned in 1925, it did so at a parity rate that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). The maintenance of the gold standard and the resistance to inflation were now inconsistent policy objectives. The Bank of France’s inability to conduct open market operations only made matters worse. The result was the accumulation of gold and the exporting of deflation to the world.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and, in the end, unsustainable. Looking back, we observe that the timing of departure from the gold standard and of subsequent recovery differed across countries: for some countries recovery came sooner, for others later. It is in this timing of departure that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker its recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931, and started to recover. Sweden left at the same time as Britain, and started to recover. The United States left in the spring of 1933, and recovery commenced. France, Holland, and Poland, which adhered to the gold standard until 1936, saw their economies continue to struggle after the United States’ recovery began. Only after they left did recovery start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation from the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” gave them no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially be contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if adherence to the gold standard was to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) further concludes that what made the pre-World War I gold standard so successful was absent during the interwar period: a credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history might have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression, and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon taking office, Roosevelt declared a bank holiday; the United States left the interwar gold standard the following month; and the government commenced several measures designed to resurrect the financial system. These measures included: (i) the use of the Reconstruction Finance Corporation to funnel large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased the information that had to be provided to potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders remained reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA) was passed in June 1933 and was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sectors, and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels; it was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional, in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), created in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown the inaccuracy of classifying the 1920s as a “new era.” Rather, the “new era” – one of government involvement in the economy, as summarized by Roosevelt’s words above – began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP growing at annual rates in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937 and did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, after peaking at 25 percent in March 1933, continued to dwell at or near double digits until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery, as the unemployment rate quickly plummeted after December 1941 to its wartime low of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however: if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological and expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase, growing 49 percent over the same period. The sources of this increase were twofold. First, aside from the immediate monetary expansion permitted by devaluation, as Romer (1993) explains, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe amid the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Second, the Treasury chose not to sterilize the gold inflows. That the increase in the money supply so closely matched the increase in the monetary base is evidence that the monetary expansion resulted from policy decisions and not from endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard, and policy makers were intent on taking actions of a different nature than those taken between 1929 and 1933.
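
Romer’s two figures imply that the money multiplier barely moved. Writing M1 = m x B, the ratio of the multiplier at the end of the period to its value at the start is

    m(1937) / m(1933) = (1 + 0.49) / (1 + 0.52) \approx 0.98

a decline of only about 2 percent over four years. Essentially all of the growth in M1 is therefore accounted for by the policy-driven growth in the base rather than by changes in banking or currency-holding behavior.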

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery. The Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting; Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time, by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes, and this expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and fiscal policy instead became the implement of choice – even though, as the research cited above shows, fiscal policy in fact contributed little and could have done much more. There is an easy explanation for why this was so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers as to why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and, worse, there was no set of principles to guide proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps there has been no other person and no other book in economics about which so much has been written. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add to or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they are viewed as right or wrong. But the influence that The General Theory exerted over academic thought and economic policy is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics. Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals (1 + C/D)/(C/D + R/D + E/D), where D = deposits, R = reserves, C = currency and E = excess reserves in the banking system.

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

The Freedmen’s Bureau

William Troost, University of British Columbia

The Bureau of Refugees, Freedmen, and Abandoned Lands, more commonly known as the Freedmen’s Bureau, was a federal agency established to help Southern blacks transition from their lives as slaves to free individuals. The challenges of this transformation were enormous, as the Civil War had devastated the region – leaving farmland dilapidated and massive amounts of capital destroyed. Additionally, the entire social order of the region was disturbed, as slave owners and former slaves were forced to interact with one another in completely new ways. The Freedmen’s Bureau was an unprecedented foray by the federal government into the sphere of social welfare during a critical period of American history. This article briefly describes this unique agency, its colorful history, and the many functions the bureau performed during its brief existence.

The Beginning of the Bureau

In March 1863, the American Freedmen’s Inquiry Commission was set up to investigate “the measures which may best contribute to the protection and improvement of the recently emancipated freedmen of the United States, and to their self-defense and self-support.”1 The commission debated various methods and activities to alleviate the condition of freedmen and aid their transition to free individuals. Basic aid activities to alleviate physical suffering and provide legal justice, education, and land redistribution were commonly mentioned in these meetings and hearings. The commission examined many issues and produced ideas that would become the foundation of the eventual Freedmen’s Bureau law. In 1864, the commission issued its final report, which laid out the basic philosophy that would guide the actions of the Freedmen’s Bureau.

“The sum of our recommendations is this: Offer the freedmen temporary aid and counsel until they become a little accustomed to their new sphere of life; secure to them, by law, their just rights of person and property; relieve them, by a fair and equal administration of justice, from the depressing influence of disgraceful prejudice; above all, guard them against the virtual restoration of slavery in any form, and let them take care of themselves. If we do this, the future of the African race in this country will be conducive to its prosperity and associated with its well-being. There will be nothing connected with it to excite regret to inspire apprehension.”2

When Congress finally got down to the business of writing a bill to aid the transition of the freedmen, it tried to integrate many of the American Freedmen’s Inquiry Commission’s recommendations. Originally the agency set up to aid in this transition was to be named the Bureau of Emancipation. However, when the bill came up for a vote on March 1, 1864, the name was changed to the Bureau of Refugees, Freedmen, and Abandoned Lands. This change was due in large part to objections that the bill was exclusionary and aimed solely at the aid of blacks; the new name was intended to enlarge support for the bill.

The House and the Senate argued about the bureau’s powers and where it should reside within the government. The House wanted the agency placed within the War Department, reasoning that the power that had freed the slaves was best suited to aid them in their transition. In the Senate, by contrast, Charles Sumner’s Committee on Slavery and Freedom wanted the bureau placed within the Department of the Treasury – as it had the power to tax and had possession of confiscated lands. Sumner felt that the freedmen “should not be separated from their best source of livelihood.”3 After a year of debate, a compromise was finally agreed to that entrusted the Freedmen’s Bureau with the administration of confiscated lands while placing the bureau within the Department of War. Thus, on March 3, 1865, with the stroke of a pen, Abraham Lincoln signed into existence the Bureau of Refugees, Freedmen, and Abandoned Lands. Selected to head the new bureau was General Oliver Otis Howard – commonly known as the Christian General. Howard had strong ties to the philanthropic community and forged close relationships with freedmen’s aid organizations.

The Freedmen’s Bureau was active in a variety of aid functions. Eric Foner writes that it was “an experiment in social policy that did not belong to the America of its day.”4 The bureau did important work in many key areas and had many functions that even today are not considered the responsibility of the national government.

Relief Services

A key function of the bureau, especially in the beginning, was to provide temporary relief for the suffering of destitute freedmen. The bureau provided rations for those most in need due to the abandonment of plantations, poor crop yields, and unemployment. A staggering number of both freedmen and refugees took advantage of this aid. A ration was defined as enough corn meal, flour, and sugar to feed a person for one week. In “the first 15 months following the war, the Bureau issued over 13 million rations, two thirds to blacks.”5 While this aid was deemed a great necessity, its scale also fostered tremendous anxiety in both General Howard and the general population – mainly that it would encourage idleness. Because of these worries, General Howard ordered that this form of relief be discontinued in the fall of 1866.

Health Care

In a similar vein, the bureau also provided medical care to the recently freed slaves. The health situation of freedmen at the conclusion of the Civil War was atrocious. Frequent epidemics of cholera, poor sanitation, and outbreaks of smallpox killed scores of freedmen. Because the freed population lacked the financial assets to purchase private health care, and was denied care in many other cases, the bureau played a valuable role.

“Since hospitals and doctors could not be relied on to provide adequate health care for freedmen, individual bureau agents on occasion responded innovatively to black distress. During epidemics, Pine Bluff and Little Rock agents relocated freedpersons to less contagion-ridden places. When blacks could not be moved, agents imposed quarantines to prevent the spread of disease. General Order Number 8…prohibited new residents from congregating in towns. The order also mandated weekly inspections of freedmen’s homes to check for filth and overcrowding.”6

In addition to preventing and containing outbreaks, the bureau also engaged more directly in health care. Because it was placed in the War Department, the bureau was able to assume operation of hospitals established by the Army during the war. After the war it expanded the system to areas previously not under military control. Observing that freedmen were not receiving an adequate quality of health services, the bureau established dispensaries providing basic medical care and drugs free of charge, or at a nominal cost. The bureau “managed in the early years of Reconstruction to treat an estimated half million suffering freedmen, as well as a smaller but significant number of whites.”7

Land Redistribution

Perhaps the most well-known function of the bureau was one that never came to fruition. During the course of the Civil War, the U.S. Army took control of a good deal of land that had been confiscated or abandoned by the Confederacy. From the time of emancipation there were rumors that confiscated lands would be provided to the recently freed slaves. This land would enable the blacks to be economically self-sufficient and provide protection from their former owners. In January 1865, General Sherman issued Special Field Orders, No. 15, which set aside the Sea Islands and lands from South Carolina to Florida for blacks to settle. According to his order, each family would receive forty acres of land and the loan of horses and mules from the Army. Similar to General Sherman’s order, the promise of land was incorporated into the bureau bill. Quickly the bureau helped blacks settle some of the abandoned lands and “by June 1865, roughly 10,000 families of freed people, with the assistance of the Freedmen’s Bureau, had taken up more than 400,000 acres.”8

While the promise of “forty acres and a mule” excited the freedmen, the widespread implementation of this policy was quickly thwarted. In the summer of 1865, President Andrew Johnson issued special pardons restoring the property of many Confederates – throwing into question the status of abandoned lands. In response, General Howard, the Commissioner of the Freedmen’s Bureau, issued Circular 13 which told agents to conserve forty-acre tracts of land for the freedmen – as he claimed presidential pardons conflicted with the laws establishing the bureau. However, Johnson quickly instructed Howard to rescind his circular and send out a new circular ordering the restoration to pardoned owners of all land except those tracts already sold. These actions by the President were devastating, as freedmen were evicted from lands that they had long occupied and improved. Johnson’s actions took away what many felt was the freedmen’s best chance at economic protection and self-sufficiency.

Judicial Functions

While the land distribution function of the new agency was thwarted, the bureau was able to perform many other duties. Bureau agents had judicial authority in the South, attempting to secure equal justice from state and local governments for both blacks and white Unionists. Local agents individually adjudicated a wide variety of disputes. In some circumstances the bureau established courts where freedmen could bring forth their complaints. After the local courts regained their jurisdiction, bureau agents kept an eye on them, retaining the authority to overturn decisions that were discriminatory towards blacks. In May 1865, the Commissioner of the bureau issued a circular “authorizing assistant commissioners to exercise jurisdiction in cases where blacks were not allowed to testify.”9

In addition to these judicial functions, the bureau also helped provide legal services in the domestic sphere. Agents helped legitimize slave marriages and presided over freedmen marriage ceremonies in areas where black marriages were obstructed. Beginning in 1866, the bureau became responsible for filing the claims of black soldiers for back pay, pensions, and bounties. The claims division remained in operation until the end of the bureau’s existence. During a time when many of the states tried to strip rights away from blacks, the bureau was essential in providing freedmen redress and access to more equitable judicial decisions and services.

Labor Relations

Another important function of the bureau was to help draw up work contracts that facilitated the hiring of freedmen. The abolition of slavery created economic confusion and stagnation, as many planters had a difficult time finding labor to work their fields. Additionally, many blacks were anxious and unsure about working for former slave owners. “Into this chaos stepped the Freedmen’s Bureau as an intermediary.”10 The bureau helped planters and freedmen draft contracts on mutually agreeable terms – negotiating several hundred thousand contracts. Once a contract was agreed upon, the agency tried to make sure both planter and worker lived up to their parts of the agreement. In essence, the bureau “would undertake the role of umpire.”11

Of the bureau’s many activities this was one of the most controversial. Both planters and freedmen complained about the insistence on labor contracts. Planters complained that labor contracts forbade the corporal punishment used in the past. They resented the limits on their activities and felt the restrictions of the contracts limited the productivity of their workers. Freedmen, on the other hand, complained that the contract structures were too restrictive and did not allow them to move freely. In essence, the bureau had an impossible task – trying to get the freedmen to return to work for former slave owners while preserving their rights and limiting abuse. The Freedmen’s Bureau’s judicial functions were of great help in enforcing these contracts fairly, making both parties live up to their end of the bargain. While historians are split over whether the bureau favored planters or the freedmen, Ralph Shlomowitz, in his detailed analysis of bureau-assisted labor contracts, found that contract terms were determined by the free interplay of market forces.12 First, he finds that contracts brokered by the bureau were detailed to an extent that would not have made sense had they not been expected to be honored. Second, contrary to popular belief, he finds that the share of crops received by labor was highly variable: in areas of higher-quality land, the share awarded to labor was less than in areas with lower land quality.

Educational Efforts

Prior to the Civil War it had been policy in the sixteen slave states to fine, whip, or imprison those who gave instruction to blacks or mulattos. In many states the punishments for teaching a person of color were quite severe. These laws severely restricted the educational opportunities of blacks – especially access to formal schooling. As a result, when given their freedom, many former slaves lacked the literacy skills necessary to protect themselves from discrimination and exploitation and to pursue many personal activities. This lack of literacy created great problems for blacks in a free labor system. Freedmen were repeatedly taken advantage of because they were often unable to read or draft contracts. Additionally, individuals lacked the ability to read newspapers and trade manuals, or to worship by reading the Bible. Thus, upon emancipation there was a great demand for freedmen’s schools.

General Howard quickly realized that education was perhaps the most important endeavor the bureau could undertake. However, its limited financial resources and the few functions it was authorized to perform constrained the extent to which it was able to assist. Much of the early work in schooling was done by a number of benevolent and religious Northern societies. While the direct aid of the bureau was initially limited, it played an essential role in organizing and coordinating these organizations in their efforts. The agency also allowed the use of many buildings in the Army’s possession, and the bureau helped transport from the North a trove of teachers – commonly referred to as Yankee schoolmarms.

While the limits of the original Freedmen’s Bureau bill hamstrung the efforts of agents, subsequent bills changed the situation, as the purse strings and functions of the bureau in the area of education were rapidly expanded. This shift in attention followed the lead of General Howard, whose “stated goal was to close one after another of the original bureau divisions while the educational work was increased with all possible energy.”13 Among the provisions of the second bureau bill were the appropriation of salaries for State Superintendents of Education, the repair and rental of school buildings, the ability to use military taxes to pay teachers’ salaries, and the establishment of the education division as a separate entity in the bureau.

These new resources were used to great effect: enrollments at bureau-financed schools grew quickly, new schools were constructed in a variety of areas, and the quality and curriculum of the schools were significantly improved. The Freedmen’s Bureau was very successful in establishing a vast network of schools to help educate the freedmen. In retrospect this was a Herculean task for the federal government to accomplish. In a region where it had been illegal to teach blacks how to read or write just a few years before, the bureau was able to help establish nearly 1,600 day schools educating over 100,000 blacks at a time. The number of bureau-aided day and night schools in operation grew to a maximum of 1,737 in March 1870, employing 2,799 teachers and instructing 103,396 pupils. In addition, 1,034 Sabbath schools aided by the bureau employed 4,988 teachers and instructed 85,557 pupils.

Matching the Integrated Public Use Sample of the 1870 Census and a constructed data set on bureau school location, one can examine the reach and prevalence of bureau-aided schools.14 Table 1 presents the summary statistics of various school concentration measures and educational outcomes for individual blacks 10-15 years old.

The variable “Freedmen’s Bureau School” equals one if there was at least one bureau-aided school in the individual’s county. The data reveal that 63.6 percent of blacks lived in counties with at least one bureau school. This shows the bureau was quite effective in reaching a large segment of the black population, as nearly two thirds of blacks living in the states of the ex-Confederacy had at least some minimal exposure to these schools. While the schools were widespread, their concentration was somewhat low. For individuals living in a county with at least one bureau-aided school, the concentration was 0.3165 bureau-aided schools per 30 square miles, or 0.4630 bureau-aided schools per 1,000 blacks.

Although the concentration of schools was somewhat low, they appear to have had a large impact on the educational outcomes of Southern blacks. Ten- to fifteen-year-olds living in a county with at least one bureau-aided school had literacy rates 6.1 percentage points higher than those living in counties without such schools. This appears to have been driven by the bureau’s increasing access to formal education for black children in these counties, as school attendance rates were 7.5 percentage points higher as well.
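
The mechanics of the county-level match described above can be sketched in a few lines of code. Everything in the sketch is hypothetical – the file names, column names, and variable definitions are stand-ins rather than the actual data or programs behind Table 1.

    import pandas as pd

    # Hypothetical inputs: individual 1870 census records and a constructed
    # county-level count of bureau-aided schools.
    people = pd.read_csv("ipums_1870_extract.csv")          # one row per person
    schools = pd.read_csv("bureau_schools_by_county.csv")   # one row per county

    # Indicator for counties with at least one bureau-aided school.
    schools["fb_school"] = (schools["n_schools"] > 0).astype(int)

    # Attach the county-level indicator to each individual record.
    merged = people.merge(schools[["county_id", "fb_school"]],
                          on="county_id", how="left")
    merged["fb_school"] = merged["fb_school"].fillna(0)

    # Mean literacy and school attendance for black 10-15 year olds,
    # by presence of a bureau-aided school in the county.
    sample = merged[(merged["black"] == 1) & merged["age"].between(10, 15)]
    print(sample.groupby("fb_school")[["literate", "attends_school"]].mean())

Raw differences in such group means of course mix the effect of the schools with county characteristics; the sketch shows only the mechanics of the match, not the identification strategy.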

Andrew Johnson and the Freedmen’s Bureau

Only eleven days after signing the bureau into existence, Abraham Lincoln was struck down by John Wilkes Booth. Taking his place in office was Andrew Johnson, a former Democratic Senator from Tennessee. Despite Johnson’s Southern roots, hopes were high that Congress and the new President could work together more closely than Congress had with the previous administration. President Lincoln and Congress had championed vastly different policies for Reconstruction. Lincoln preferred the term “Restoration” to “Reconstruction,” as he felt it was constitutionally impossible for a state to secede.15 Lincoln championed the quick integration of the South into the Union and believed it could best be accomplished under the direction of the executive branch. In contrast, Republicans in Congress led by Charles Sumner and Thaddeus Stevens felt the Confederate states had actually seceded and relinquished their constitutional rights. The Republicans in Congress advocated strict conditions for re-entry into the Union and programs aimed at reshaping Southern society.

The ascension of Johnson to the presidency gave Congress hope that it would have an ally in the White House in terms of Reconstruction philosophy. According to Howard Nash, the “Radicals were delighted … to have Vice President Andrew Johnson, who they had good reason to suppose was one of their number, elevated to the presidency.”16 In the months before and immediately after taking office, Johnson repeatedly talked about the need to punish rebels in the South. After Lincoln’s death Johnson became more impassioned in his speeches. In late April 1865 Johnson told an Indiana delegation, “Treason must be made odious…traitors must be punished and impoverished…their social power must be destroyed.”17 If anything, many feared that Johnson might stray too far from the Presidential Reconstruction offered by Lincoln and be overly harsh in his treatment of the South.

Immediately after taking office, Johnson honored Lincoln’s choice to head the bureau by appointing General Oliver Otis Howard as commissioner. While this action raised hopes in Congress that it would be able to work with the new administration, Johnson quickly switched course. After his selection of Howard, President Johnson and the “Radical” Republicans would scarcely agree on anything for the remainder of his term. On May 29, 1865, Johnson issued a proclamation conferring amnesty, pardon, and the restoration of property rights on almost all Confederate soldiers who took an oath pledging loyalty to the Union. Johnson later came out in support of the Southern black codes, which sought to return blacks to a position of near slavery, and argued that the Confederate states should be accepted back into the Union without the condition of ratifying the Fourteenth Amendment.

The original bill signed by Lincoln established the bureau for the duration of the Civil War and one year thereafter. The language of the bill was somewhat ambiguous, and with the surrender of Confederate forces military conflict had ceased, prompting debate over when the bureau would be discontinued. The consensus was that, unless another bill was brought forth, the bureau would be discontinued in early 1866. In response, Congress quickly got to work on a new Freedmen’s Bureau bill.

While Congress started work on a new bill, President Johnson tried to build support for the view that the need for the bureau had come to an end. At the President’s request, Ulysses S. Grant made a whirlwind tour of the South to report on the present situation. The route was exceptionally brief and skewed toward the areas best under control. Accordingly, Grant’s report said that the Freedmen’s Bureau had done good work and that the freedmen now appeared able to fend for themselves without the help of the federal government.

In contrast, Carl Schurz made a long tour of the South only a few months after Grant and found the freedmen in a much different situation. In many areas the bureau was viewed as the only restraint on the most insidious treatment of blacks. As Q.A. Gillmore stated in the report,

“For reasons already suggested I believe that the restoration of civil power that would take the control of this question out of the hands of the United States authorities (whether exercised through the military authorities or through the Freedmen’s Bureau) would, instead of removing existing evils, be almost certain to augment them.”18

While the first bill was adequate in many ways, it was rather weak in a few areas. In particular, it made no appropriations for officers of the bureau and earmarked no funds for the establishment of schools. General Howard and many of his officers reported on the great need for the bureau and pushed for its continuation indefinitely, or at least until the freedmen were in a less vulnerable position. After listening to these reports and the recommendations of General Howard, Senator Lyman Trumbull, a moderate Republican, crafted a new bill. It proposed that the bureau remain in existence until abolished by law, provide more explicit aid to education and land for the freedmen, and protect the civil rights of blacks. The bill passed both the Senate and House and was sent to Andrew Johnson, who promptly vetoed it. In his response to the Senate, Johnson wrote that “there can be no necessity for the enlargement of the powers of the bureau for which provision is made in the bill.”19

While the President’s message was definitive, the veto came as a shock to many in Congress. President Johnson had been consulted prior to the bill’s passage and had assured General Howard and Senator Trumbull that he would support it. In response to the President’s opposition, the Senate and House passed a bill that addressed some of Johnson’s complaints, including limiting the bureau’s life to two more years. Even after this watering down, the bill was once again vetoed. This time, however, it garnered enough support to override the President’s veto. The veto and the subsequent override established a policy of open hostility between the legislative and executive branches. Prior to the Johnson administration, overriding a veto was extremely rare, having occurred only six times.20 After the passage of this bill, however, it became commonplace: Congress would override fifteen vetoes during the less than four years Johnson was in office.

End of the Bureau

While work in the educational division picked up after the passage of the second bill, many of the other activities of the bureau were winding down. On July 25, 1868, a bill was signed into law requiring the withdrawal of most bureau officers from the states and ending the functions of the bureau except those related to education and claims. Although the educational activities of the bureau were to continue indefinitely, most state superintendent of education offices had closed by the middle of 1870. On November 30, 1870, Rev. Alvord resigned his post as General Superintendent of Education.21 Some small activities of the bureau continued after his resignation, but these were scaled back greatly and largely consisted of correspondence. Finally, due to a lack of appropriations, the activities of the bureau ceased in March 1871.

The expiration of the bureau was somewhat anticlimactic. A number of representatives wanted to establish a permanent bureau or organization for blacks to regulate their relations with the national and state governments.22 However, this concept was too radical to pass by a veto-proof margin. There was also talk of moving many of the bureau’s functions into other parts of the government, but over time the appropriations dwindled and the urgency to work out a transfer proposal withered away along with the bureau itself.

References

Alston, Lee J. and Joseph P. Ferrie. “Paternalism in Agricultural Labor Contracts in the U.S. South: Implications for the Growth of the Welfare State.” American Economic Review 83, no. 4 (1993): 852-76.

American Freedmen’s Inquiry Commission. Records of the American Freedmen’s Inquiry Commission, Final Report, Senate Executive Document 53, 38th Congress, 1st Session, Serial 1176, 1864.

Cimbala, Paul A., and Randall M. Miller, editors. The Freedmen’s Bureau and Reconstruction: Reconsiderations. New York: Fordham University Press, 1999.

Congressional Research Service, http://clerk.house.gov/art_history/house_history/vetoes.html

Finley, Randy. From Slavery to Uncertain Freedom: The Freedmen’s Bureau in Arkansas, 1865-1869. Fayetteville: University of Arkansas Press, 1996.

Johnson, Andrew. “Message of the President: Returning Bill (S.60),” Pg. 3, 39th Congress, 1st Session, Executive Document No. 25, February 19, 1866.

McFeely, William S. Yankee Stepfather: General O.O. Howard and the Freedmen. New York: W.W. Norton, 1994.

Milton, George Fort. The Age of Hate: Andrew Johnson and the Radicals. New York: Coward-McCann, 1930.

Nash, Howard P. Andrew Johnson: Congress and Reconstruction. Rutherford, NJ: Fairleigh Dickinson University Press, 1972.

Parker, Marjorie H. “Some Educational Activities of the Freedmen’s Bureau.” Journal of Negro Education 23, no. 1 (1954): 9-21.

Q.A. Gillmore to Carl Schurz, July 27, 1865, Documents Accompanying the Report of Major General Carl Schurz, Hilton Head, SC.

Ruggles, Steven, Matthew Sobek, Trent Alexander, Catherine A. Fitch, Ronald Goeken, Patricia Kelly Hall, Miriam King, and Chad Ronnander. Integrated Public Use Microdata Series: Version 3.0 [Machine-readable database]. Minneapolis, MN: Minnesota Population Center [producer and distributor], 2004.

Shlomowitz, Ralph. “The Transition from Slave to Freedman Labor Arrangements in Southern Agriculture, 1865-1870.” Journal of Economic History 39, no. 1 (1979): 333-36.

Shlomowitz, Ralph. “The Origins of Southern Sharecropping.” Agricultural History 53, no. 3 (1979): 557-75.

Simpson, Brooks D. “Ulysses S. Grant and the Freedmen’s Bureau.” In The Freedmen’s Bureau and Reconstruction: Reconsiderations, edited by Paul A. Cimbala and Randall M. Miller. New York: Fordham University Press, 1999.

Citation: Troost, William. “Freedmen’s Bureau”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/the-freedmens-bureau/