EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities that occur in a group of workers, typically over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into comparatively safe work such as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago, in 1900, about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and that their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal was expensive. As a result, British mines used methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890-1904
(Fatality rates per Thousand Workers per Year)

Years       American Anthracite   American Bituminous   Great Britain
1890-1894   3.29                  2.52                  1.61
1900-1904   3.13                  3.53                  1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were cheaply built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889-1901
(Fatality Rates per Thousand Workers per Year)

                                        1889      1895   1901
British railroad workers, all causes    1.14      0.95   0.89
British trainmen (a), all causes        4.26      3.22   2.21
British trainmen, coupling              0.94      0.83   0.74
American railroad workers, all causes   2.67      2.31   2.50
American trainmen, all causes           8.52      6.45   7.35
American trainmen, coupling             1.73 (c)  1.20   0.78
American trainmen, braking (b)          3.25 (c)  2.44   2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increasing output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also date from the nineteenth century, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and reported being impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and the National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont, and in whole industries such as steel making (see Table 3), safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, six disastrous blasts in 1940 that killed 276 men finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)


Year   Manufacturing   Coal Mining
1926   24.2            n.a.
1931   18.9            89.9
1939   14.9            69.5
1945   18.6            60.7
1950   14.7            53.3
1960   12.0            43.4
1970   15.2            42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.
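The "about 38 percent" decline in manufacturing injury rates cited in the text can be checked directly against the manufacturing column of Table 4:

```python
# Manufacturing injury rates per million manhours, from Table 4.
rate_1926 = 24.2
rate_1939 = 14.9

# Percent decline between 1926 and 1939.
decline_pct = (rate_1926 - rate_1939) / rate_1926 * 100
print(f"{decline_pct:.1f}")  # 38.4
```

The result, 38.4 percent, matches the rounded figure given in the text.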

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine': the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850-World War I.” Bulletin of the History of Medicine 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948, Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2,000 hours, ten injuries among 450 workers result in [10/(450 x 2,000)] x 1,000,000 = 11.1 injuries per million hours worked.
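The arithmetic of these rate conversions can be sketched in a short script (the figures are the note's illustrative example, not data from the article):

```python
# Illustrative rate calculation: 10 injuries among 450 workers,
# assuming a 2,000-hour work year.
injuries = 10
workers = 450
hours_per_year = 2000

rate = injuries / workers                      # injuries per worker per year
per_thousand = rate * 1_000                    # per thousand workers
per_hundred_thousand = rate * 100_000          # per hundred thousand workers
per_million_hours = injuries / (workers * hours_per_year) * 1_000_000

print(round(per_thousand, 1))       # 22.2
print(round(per_million_hours, 1))  # 11.1
```

The same ten injuries thus appear as 22.2 per thousand workers or 11.1 per million workhours, depending on the denominator chosen.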

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series D-1029 to D-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun Never Shines. Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism.” Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety. Aldrich, “‘The Needless Peril.'”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,'” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

Citation: Aldrich, Mark. “History of Workplace Safety in the United States, 1880-1970.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-workplace-safety-in-the-united-states-1880-1970/

The International Natural Rubber Market, 1870-1930

Zephyr Frank, Stanford University and Aldo Musacchio, Ibmec São Paulo

Overview of the Rubber Market, 1870-1930

Natural rubber was first used by the indigenous peoples of the Amazon basin for a variety of purposes. By the middle of the eighteenth century, Europeans had begun to experiment with rubber as a waterproofing agent. In the early nineteenth century, rubber was used to make waterproof shoes (Dean, 1987). The best source of latex, the milky fluid from which natural rubber products were made, was Hevea brasiliensis, which grew predominantly in the Brazilian Amazon (but also in the Amazonian regions of Bolivia and Peru). Thus, by geographical accident, the first period of rubber’s commercial history, from the late 1700s through about 1900, was centered in Brazil; the second period, from roughly 1910 on, was increasingly centered in East Asia as the result of plantation development. The first century of rubber was typified by relatively low levels of production, high wages, and very high prices; the period following 1910 was one of rapidly increasing production, low wages, and falling prices.

Uses of Rubber

The early uses of the material were quite limited. The initial problem with natural rubber was its sensitivity to temperature changes, which altered its shape and consistency. In 1839 Charles Goodyear developed the process called vulcanization, which modified rubber so that it would withstand extreme temperatures. Only then did natural rubber become suitable for producing hoses, tires, industrial bands, sheets, shoes, shoe soles, and other products. What initially set off the “Rubber Boom,” however, was the popularization of the bicycle. The boom was then accentuated after 1900 by the development of the automobile industry and the expansion of the tire industry to produce car tires (Weinstein, 1983; Dean, 1987).

Brazil’s Initial Advantage and High-Wage Cost Structure

Until the turn of the twentieth century, Brazil and the countries that share the Amazon basin (i.e., Bolivia, Venezuela, and Peru) were the only exporters of natural rubber. Brazil sold almost ninety percent of the total rubber commercialized in the world. The fundamental fact that explains Brazil’s entry into and domination of natural rubber production during the period from 1870 through roughly 1913 is that most of the world’s rubber trees grew naturally in the Amazon region of Brazil. The Brazilian rubber industry developed a high-wage cost structure as the result of labor scarcity and lack of competition in the early years of rubber production. Since there were no credit markets to finance the trips of workers from other parts of Brazil to the Amazon, workers paid for their trips with loans from their future employers. Much like indentured servitude during colonial times in the United States, these loans were paid back to the employers with work once the laborers were established in the Amazon basin. Another factor that increased the costs of producing rubber was that most provisions for tappers in the field had to be shipped in from outside the region at great expense (Barham and Coomes, 1994). This made Brazilian production very expensive compared to the future plantations in Asia. Nevertheless, Brazil’s system of production worked well as long as two conditions were met: first, that the demand for rubber did not grow too quickly, for wild rubber production could not expand rapidly owing to labor and environmental constraints; and second, that competition based on some other, more efficient arrangement of the factors of production did not exist. As can be seen in Figure 1, Brazil dominated the natural rubber market until the first decade of the twentieth century.

Between 1900 and 1913, these conditions ceased to hold. First, the demand for rubber skyrocketed [see Figure 2], providing a huge incentive for other producers to enter the market. Prices had been high before, but Brazilian supply had been quite capable of meeting demand; now, prices were high and demand appeared insatiable. Plantations, which had been possible since the 1880s, now became a reality mainly in the colonies of Southeast Asia. Because Brazil was committed to a high-wage, labor-scarce production regime, it was unable to counter the entry of Asian plantations into the market it had dominated for half a century.

Southeast Asian Plantations Develop a Low-Cost, Labor-Intensive Alternative

In Asia, the British and Dutch drew upon their superior stocks of capital and vast pools of cheap colonial labor to transform rubber collection into a low-cost, labor-intensive industry. Investment per tapper in Brazil was reportedly 337 pounds sterling circa 1910; in the low-cost Asian plantations, investment was estimated at just 210 pounds per worker (Dean, 1987). Not only were Southeast Asian tappers cheaper, they were potentially eighty percent more productive (Dean, 1987).

Ironically, the new plantation system proved equally susceptible to uncertainty and competition. Unexpected sources of uncertainty arose in the technological development of automobile tires. In spite of colonialism, the British and Dutch were unable to collude to control production, and prices plummeted after 1910. When the British did attempt to restrict production in the 1920s, the United States attempted to set up plantations in Brazil and the Dutch were happy to take market share. Yet it was too late for Brazil: the cost structure of Southeast Asian plantations could not be matched. In a sense, then, the game was no longer worth the candle: in order to compete in rubber production, Brazil would have had to have significantly lower wages — which would only have been possible with a vastly expanded transport network and domestic agricultural sector in the hinterland of the Amazon basin. Such an expensive solution made no economic sense in the 1910s and 1920s, when coffee and nascent industrialization in São Paulo offered much more promising prospects.

Natural Rubber Extraction and Commercialization: Brazil

Rubber Tapping in the Amazon Rainforest

One disadvantage Brazilian rubber producers suffered was that the organization of production depended on the distribution of Hevea brasiliensis trees in the forest. The owner (or often the lease concessionary) of a large plot of land would hire tappers to gather rubber by gouging the tree trunk with an axe. In Brazil, the usual practice was to make a large dent in the tree and hang a small bowl to collect the latex that flowed from the trunk. Typically, tappers had two “rows” of trees they worked on, alternating one row per day. Each “row” consisted of several circular paths through the forest containing more than 100 trees apiece. Rubber could only be collected during the tapping season (August to January), and the living conditions of tappers were hard. As the need for rubber expanded, tappers had to be sent deep into the Amazon rainforest to look for unexplored land with more productive trees. Tappers established their shacks close to the river because rubber, once smoked, was sent by boat to Manaus (capital of the state of Amazonas) or to Belém (capital of the state of Pará), both entrepôts for rubber exports to Europe and the US.[1]

Competition or Exploitation? Tappers and Seringalistas

After collecting the rubber, tappers would go back to their shacks and smoke the resin in order to make balls of partially filtered and purified rough rubber that could be sold at the ports. There is much discussion about the commercialization of the product. Weinstein (1983) argues that the seringalista — the employer of the rubber tapper — controlled the transportation of rubber to the ports, where he sold it, often in exchange for goods that could be sold back to the tapper at a large markup. In this economy money was scarce, and the “wages” of tappers, or seringueiros, depended on the current price of rubber; the usual agreement was for tappers to split the gross profits with their patrons. These wages were most commonly paid in goods, such as cigarettes, food, and tools. According to Weinstein (1983), the goods were overpriced by the seringalistas to extract larger profits from the seringueiros’ work. Barham and Coomes (1994), on the other hand, argue that the structure of the market in the Amazon was less closed and that independent traders would travel around the basin in small boats, willing to exchange goods for rubber. Poor monitoring by employers and an absent state facilitated these under-the-counter transactions, which allowed tappers to get better pay for their work.

Exporting Rubber

From the ports, rubber passed into the hands of mainly Brazilian, British, and American exporters. Contrary to what Weinstein (1983) argued, Brazilian producers or local merchants from the interior could choose to send rubber on consignment to a New York commission house rather than sell it to an exporter in the Amazon (Shelley, 1918). Rubber was taken, like other commodities, to ports in Europe and the US to be distributed to the industries that bought large amounts of the product on the London or New York commodity exchanges. A large part of the rubber produced was traded on these exchanges, but tire manufacturers and other large consumers also made direct purchases from distributors in the country of origin.[2]

Rubber Production in Southeast Asia

Seeds Smuggled from Brazil to Britain

The Hevea brasiliensis, the most important type of rubber tree, was an Amazonian species. This is why the countries of the Amazon basin were the main producers of rubber at the beginning of the international rubber trade. How, then, did British and Dutch colonies in Southeast Asia end up dominating the market? Brazil tried to prevent the export of Hevea brasiliensis seeds, knowing that as long as it remained the principal producer, its profits from the rubber trade were assured. Protecting property rights in seeds proved a futile exercise. In 1876, the Englishman Henry Wickham, an aspiring author and rubber expert, smuggled 70,000 seeds to London, a feat for which he earned Brazil’s eternal opprobrium and an English knighthood. After experimentation, 2,800 plants were raised at the Royal Botanical Gardens in London (Kew Gardens) and then shipped to the Peradeniya Gardens in Ceylon. In 1877 a case of 22 plants reached Singapore and was planted at the Singapore Botanical Garden. In the same year the first plant arrived in the Malay States. Since rubber trees needed six to eight years to become mature enough to yield good rubber, tapping began in the 1880s.

Scientific Research to Maximize Yields

In order to develop rubber extraction in the Malay States, more scientific intervention was needed. In 1888, H. N. Ridley was appointed director of the Singapore Botanical Garden and began experimenting with tapping methods. The final result of these experiments was the discovery of how to extract rubber so that the tree maintained a high yield over a long period. Rather than making a deep gouge in the trunk with an axe, as in Brazil, Southeast Asian tappers scraped the bark in a series of overlapping Y-shaped cuts, so that the latex ran down a channel at the bottom into a collecting receptacle. According to Akers (1912), the Asian tapping techniques ensured the exploitation of the trees for longer periods, because the Brazilian technique scarred the tree’s bark and lowered yields over time.

Rapid Commercial Development and the Automobile Boom

Commercial planting in the Malay States began in 1895. The development of large-scale plantations was slow at first because of the lack of capital. Investors took little interest in plantations until the prospects for rubber improved radically with the spectacular growth of the automobile industry. By 1905, European capitalists were sufficiently interested in large-scale plantations in Southeast Asia to plant some 38,000 acres of trees. Between 1905 and 1911 acreage grew by over 70,000 acres per year, and by the end of 1911 the total in the Malay States reached 542,877 acres (Baxendale, 1913). This expansion was made possible by increasingly sophisticated business organization: joint stock companies were created to exploit the land grants, and capital was raised through stock issues on the London Stock Exchange. The high returns of the first years (1906-1910) made investors ever more optimistic, and capital flowed in large amounts. Plantations depended on a highly disciplined system of labor and an intensive use of land.

Malaysia’s Advantages over Brazil

In addition to the intensive use of land, the production system in Malaysia had several economic advantages over that of Brazil. First, in the Malay States there was no specific tapping season, unlike Brazil, where rain prevented tappers from collecting rubber during six months of the year. Second, health conditions were better on the plantations, where rubber companies typically provided basic medical care and built infirmaries. In Brazil, by contrast, yellow fever and malaria made survival harder for rubber tappers, who were dispersed in the forest without even rudimentary medical attention. Finally, better living conditions and the support of the British and Dutch colonial authorities helped to attract Indian labor to the rubber plantations. Japanese and Chinese laborers also migrated to the plantations of Southeast Asia in response to relatively high wages (Baxendale, 1913).

Initially, demand for rubber was associated with specialized industrial components (belts and gaskets, etc.), consumer goods (golf balls, shoe soles, galoshes, etc.), and bicycle tires. Prior to the development of the automobile as a mass-marketed phenomenon, the Brazilian wild rubber industry was capable of meeting world demand, and, furthermore, it was impossible for rubber producers to predict the scope and growth of the automobile industry before the 1900s. Thus, as Figure 3 indicates, growth in demand, as measured by U.K. imports, was not particularly rapid in the period 1880-1899. There was no reason to believe, in the early 1880s, that demand for rubber would explode as it did in the 1890s. Even as demand rose in the 1890s with the bicycle craze, the rate of increase was not beyond the capacity of wild rubber producers in Brazil and elsewhere (see Figure 3). High rubber prices did not induce rapid increases in production or plantation development in the nineteenth century. In this context, Brazil developed a reasonably efficient industry based on its natural resource endowment and limited labor and capital sources.

In the first three decades of the twentieth century, major changes in both supply and demand created unprecedented uncertainty in rubber markets. On the supply side, Southeast Asian rubber plantations transformed the cost structure and capacity of the industry. On the demand side, and directly inducing plantation development, automobile production and associated demand for rubber exploded. Then, in the 1920s, competition and technological advance in tire production led to another shift in the market with profound consequences for rubber producers and tire manufacturers alike.

Rapid Price Fluctuations and Output Lags

Figure 1 shows the fluctuations of the price of Rubber Smoked Sheet type 1 (RSS1) in London on an annual basis. The movements from 1906 to 1910 were highly volatile on a monthly basis as well, complicating forecasts and making it hard for producers to decide how to react to market signals. Even though information on prices and quantities was published every month in the major rubber journals, producers had little idea of what would happen in the long run. If prices were high today, they wanted to expand the area planted; but since it took six to eight years for trees to yield good rubber, they had to wait to see the result of the expansion many years and many price swings later. Since many producers reacted in the same way, periods of overproduction six to eight years after a price rise were common.[3] Overproduction meant low prices, but since investments were mostly sunk (the costs of preparing the land, planting the trees, and bringing in the workers could not be recovered, and these resources could not easily be shifted to other uses), the market tended to stay oversupplied for long periods of time.
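This lag dynamic, planting decisions keyed to today's price while supply responds only years later, is essentially a cobweb process. The sketch below uses entirely hypothetical parameters (demand curve, planting response, decay rate are all invented for illustration, not calibrated to historical data) to show how such a lag generates recurring over- and under-supply:

```python
# Cobweb-style sketch of the planting lag. All parameters are hypothetical
# and purely illustrative; they are not calibrated to historical rubber data.

LAG = 6          # years from planting to first good yield
YEARS = 40

def price(supply):
    """Downward-sloping inverse demand (illustrative), floored at 1."""
    return max(1.0, 100.0 - 0.4 * supply)

def new_planting(p):
    """Acreage planted responds to the *current* price (illustrative)."""
    return 0.25 * p

supply = 50.0               # initial annual output
pipeline = [0.0] * LAG      # plantings not yet mature
prices = []
for year in range(YEARS):
    p = price(supply)
    prices.append(p)
    pipeline.append(new_planting(p))             # matures LAG years from now
    supply = 0.85 * (supply + pipeline.pop(0))   # old trees decline each year

# Because supply answers prices only after a six-year delay, high prices
# sow the seeds of later gluts, and low prices of later shortages.
```

Because each producer optimizes against the current price, the market as a whole overshoots in both directions, which is the overproduction pattern the paragraph describes.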

In figure 1 we see the annual price of Malaysian rubber plotted over time.

The years 1905 and 1906 marked historic highs for rubber prices, surpassed only briefly in 1909 and 1910. The area planted in rubber throughout Asia grew from 15,000 acres in 1901 to 433,000 acres in 1907; these plantings matured circa 1913, and cultivated rubber surpassed Brazilian wild rubber in volume exported.[4] The growth of the Asian rubber industry soon swamped Brazil’s market share and drove prices well below pre-boom levels. After the major price peak of 1910, prices plummeted and followed a downward trend throughout the 1920s. By 1921, the bottom had dropped out of the market, and Malaysian rubber producers were induced by the British colonial authorities to enter into a scheme to restrict production. Plantations received export coupons that set quotas limiting the supply of rubber. The restriction did not affect prices until 1924, when consumption outstripped production and prices began to rise rapidly. The scheme’s success was short-lived, however: competition from the Dutch plantations in Southeast Asia and elsewhere drove prices down again by 1926, and the plan was officially ended in 1928.[5]

Automobiles’ Impact on Rubber Demand

In order to understand the boom in rubber production, it is fundamental to look at the automobile industry. Cars had originally been adapted from horse-drawn carriages; some ran on wooden wheels, some on metal, some shod as it were in solid rubber. In any case, the ride at the speeds cars were soon capable of was impossible to bear. The pneumatic tire was quickly adopted from the bicycle, and the automobile tire industry was born — soon to account for well over half of rubber company sales in the United States where the vast majority of automobiles were manufactured in the early years of the industry.[6] The amount of rubber required to satisfy demand for automobile tires led first to a spike in rubber prices; second, it led to the development of rubber plantations in Asia.[7]

The connection between automobiles, plantations, and the rubber tire industry was explicit and obvious to observers at the time. Harvey Firestone, son of the founder of the company, put it this way:

It was not until 1898 that any serious attention was paid to plantation development. Then came the automobile, and with it the awakening on the part of everybody that without rubber there could be no tires, and without tires there could be no automobiles. (Firestone, 1932, p. 41)

The emergence of a strong consuming sector linked to the automobile was thus essential. Price alone does not explain the timing: the average price of rubber in 1880-1884, 401 pounds sterling per ton, was not far below the 459 pounds per ton averaged from 1900 to 1904, when the first plantations were being established. Asian plantations were developed in response both to high rubber prices and to what everyone could see was an exponentially growing source of demand in automobiles. Earlier consumers of rubber had not shown the kind of dynamism needed to spur entry by plantations into the natural rubber market, even though prices were very high throughout most of the second half of the nineteenth century.

Producers Need to Forecast Future Supply and Demand Conditions

Rubber producers made decisions about production and planting during the period 1900-1912 with an eye to windfall profits rather than the long-run sustainability of their business. High prices gave everyone an incentive to increase production, but increasing production through additional acreage could mean losses for everyone in the future, since too much supply would drive prices down. Current prices, moreover, were a poor guide when investment decisions had to be made six or more years in advance, as was the case in plantation production: to invest in plantations, capital had to predict the future interaction of supply and demand. Demand, although high and apparently relatively price inelastic, was not entirely predictable. It was predictable enough, however, for planters to expand acreage in Asia at a dramatic rate. Planters were often uncertain about the aggregate level of supply: new plantations were constantly coming into production while others were entering decline or bankruptcy. Their investments could thus yield a great deal in the short run, but when everyone reacted in the same way, prices were driven down and profits fell with them. This is what happened in the 1920s, after the acreage expansion of the first two decades of the century.

Demand Growth Unexpectedly Slows in the 1920s

Plantings between 1912 and 1916 were destined to come into production during a period in which growth in the automobile industry leveled off significantly owing to the recession of 1920-21. Making matters worse for rubber producers, major advances in tire technology further restrained demand — for example, the change from corded to balloon tires increased average tire tread mileage from 8,000 to 15,000 miles.[8] The shift from corded to balloon tires decreased demand for natural rubber even as the automobile industry recovered from the recession in the early 1920s. In addition, better design of tire casings circa 1920 led to the growth of the retreading industry, which further reduced rubber consumption. Finally, better techniques in cotton weaving lowered friction and heat and further extended tire life.[9] As rubber supplies increased and demand decreased and became more price inelastic, prices plummeted: neither demand nor price proved predictable over the long run, and suppliers paid a stiff price for overextending themselves during the boom years. Rubber tire manufacturers suffered the same fate: competition and technology (which they themselves had introduced) pushed prices downward and, at the same time, flattened demand (Allen, 1936).[10]

Now, if one looks at the price of rubber and the rate of growth in demand as measured by imports in the 1920s, it is clear that the industry was over-invested in capacity. The consequences of technological change were dramatic for tire manufacturer profits as well as for rubber producers.

Conclusion

The natural rubber trade underwent several radical transformations over the period 1870 to 1930. First, prior to 1910, it was characterized by high costs of production and high prices for final goods; during this period most rubber was produced by tapping wild rubber trees in the Amazon region of Brazil. After 1900, and especially after 1910, rubber was increasingly produced on low-cost plantations in Southeast Asia. The price of rubber fell with plantation development and, at the same time, the volume of rubber demanded by car tire manufacturers expanded dramatically. Uncertainty in both supply and demand (often driven by changing tire technology) meant that natural rubber producers and tire manufacturers alike experienced great volatility in returns. The overall evolution of the natural rubber trade and the related tire manufacturing industry was toward large-volume, low-cost production in an internationally competitive environment marked by commodity price volatility and declining levels of profit as the industry matured.

References

Akers, C. E. Report on the Amazon Valley: Its Rubber Industry and Other Resources. London: Waterlow & Sons, 1912.

Allen, Hugh. The House of Goodyear. Akron: Superior Printing, 1936.

Alves Pinto, Nelson Prado. Política Da Borracha No Brasil. A Falência Da Borracha Vegetal. São Paulo: HUCITEC, 1984.

Babcock, Glenn D. History of the United States Rubber Company. Indiana: Bureau of Business Research, 1966.

Barham, Bradford, and Oliver Coomes. “The Amazon Rubber Boom: Labor Control, Resistance, and Failed Plantation Development Revisited.” Hispanic American Historical Review 74, no. 2 (1994): 231-57.

Barham, Bradford, and Oliver Coomes. Prosperity’s Promise. The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.

Barham, Bradford, and Oliver Coomes. “Wild Rubber: Industrial Organisation and the Microeconomics of Extraction during the Amazon Rubber Boom (1860-1920).” Hispanic American Historical Review 26, no. 1 (1994): 37-72.

Baxendale, Cyril. “The Plantation Rubber Industry.” India Rubber World, 1 January 1913.

Blackford, Mansel, and K. Austin Kerr. BFGoodrich. Columbus: Ohio State University Press, 1996.

Brazil. Instituto Brasileiro de Geografia e Estatística. Anuário Estatístico Do Brasil. Rio de Janeiro: Instituto Brasileiro de Geografia e Estatística, 1940.

Dean, Warren. Brazil and the Struggle for Rubber: A Study in Environmental History. Cambridge: Cambridge University Press, 1987.

Drabble, J. H. Rubber in Malaya, 1876-1922. Oxford: Oxford University Press, 1973.

Firestone, Harvey Jr. The Romance and Drama of the Rubber Industry. Akron: Firestone Tire and Rubber Co., 1932.

Santos, Roberto. História Econômica Da Amazônia (1800-1920). São Paulo: T.A. Queiroz, 1980.

Schurz, William Lytle, O. D. Hargis, Curtis Fletcher Marbut, and C. B. Manifold. Rubber Production in the Amazon Valley. U.S. Bureau of Foreign and Domestic Commerce (Department of Commerce), Trade Promotion Series, no. 4; Crude Rubber Survey, no. 28. Washington: Government Printing Office, 1925.

Shelley, Miguel. “Financing Rubber in Brazil.” India Rubber World, 1 July 1918.

Weinstein, Barbara. The Amazon Rubber Boom, 1850-1920. Stanford: Stanford University Press, 1983.


Notes:

[1] Rubber tapping in the Amazon basin is described in Weinstein (1983), Barham and Coomes (1994), Stanfield (1998), and in several articles published in India Rubber World, the main journal on rubber trading. See, for example, the explanation of tapping in the October 1, 1910 issue, or “The Present and Future of the Native Hevea Rubber Industry” in the January 1, 1913 issue. For a detailed analysis of the rubber industry by region in Brazil by contemporary observers, see Schurz et al. (1925).

[2] Newspapers such as The Economist or the London Times included sections on rubber trading, such as weekly or monthly reports of the market conditions, prices and other information. For the dealings between tire manufacturers and distributors in Brazil and Malaysia see Firestone (1932).

[3] Using cross-correlations of production and prices, we found that changes in production at time t were correlated with price changes in t-6 and t-8 (years). This is only weak evidence because these correlations are not statistically significant.
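The cross-correlation exercise in this note can be sketched as follows. The series here are simulated with a built-in seven-year lag purely to illustrate the method; they are not the authors' data, and the helper names are invented for the example:

```python
# Illustrative cross-correlation of production changes with lagged price
# changes. Simulated series with a built-in seven-year lag, NOT the
# historical data behind this note.
import random

random.seed(0)
prices = [100 + random.gauss(0, 10) for _ in range(40)]
# production responds to the price seven years earlier, plus noise
production = [50.0] * 7 + [0.4 * prices[t - 7] + random.gauss(0, 5)
                           for t in range(7, 40)]

def diff(x):
    """First differences (year-over-year changes)."""
    return [b - a for a, b in zip(x, x[1:])]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

dp, dq = diff(prices), diff(production)
for lag in (6, 7, 8):
    # production changes at time t against price changes at t - lag
    print(lag, round(corr(dq[lag:], dp[:len(dp) - lag]), 2))
```

With real series one would also test each coefficient for statistical significance, which is the caveat the note itself raises.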

[4] Drabble (1973), 213, 220. The expansion in acreage was accompanied by a boom in company formation.

[5] Drabble (1973), 192-199. This was the so-called Stevenson Committee restriction, which lasted from 1922 to 1928. The plan essentially limited the amount of rubber each planter could export, assigning quotas through coupons.

[6] Pneumatic tires were first adapted to automobiles in 1896; Dunlop’s pneumatic bicycle tire was introduced in 1888. The great advantage of these tires over solid rubber was that they generated far less friction, extending tread life, and, of course, cushioned the ride and allowed for higher speeds.

[7] Early histories of the rubber industry tended to blame Brazilian “monopolists” for holding up supply and reaping windfall profits, see, e.g., Allen (1936), 116-117. In fact, rubber production in Brazil was far from monopolistic; other reasons account for supply inelasticity.

[8] Blackford and Kerr (1996), p. 88.

[9] The so-called “supertwist” weave allowed for the manufacture of larger, more durable tires, especially for trucks. Allen (1936), pp. 215-216.

[10] Allen (1936), p. 320.

Citation: Frank, Zephyr and Aldo Musacchio. “The International Natural Rubber Market, 1870-1930.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-international-natural-rubber-market-1870-1930/

English Poor Laws

George Boyer, Cornell University

A compulsory system of poor relief was instituted in England during the reign of Elizabeth I. Although the role played by poor relief was significantly modified by the Poor Law Amendment Act of 1834, the Crusade Against Outrelief of the 1870s, and the adoption of various social insurance programs in the early twentieth century, the Poor Law continued to assist the poor until it was replaced by the welfare state in 1948. For nearly three centuries, the Poor Law constituted “a welfare state in miniature,” relieving the elderly, widows, children, the sick, the disabled, and the unemployed and underemployed (Blaug 1964). This essay will outline the changing role played by the Poor Law, focusing on the eighteenth and nineteenth centuries.

The Origins of the Poor Law

While legislation dealing with vagrants and beggars dates back to the fourteenth century, perhaps the first English poor law legislation was enacted in 1536, instructing each parish to undertake voluntary weekly collections to assist the “impotent” poor. The parish had been the basic unit of local government since at least the fourteenth century, although Parliament imposed few if any civic functions on parishes before the sixteenth century. Parliament adopted several other statutes relating to the poor in the next sixty years, culminating with the Acts of 1597-98 and 1601 (43 Eliz. I c. 2), which established a compulsory system of poor relief that was administered and financed at the parish (local) level. These Acts laid the groundwork for the system of poor relief up to the adoption of the Poor Law Amendment Act in 1834. Relief was to be administered by a group of overseers, who were to assess a compulsory property tax, known as the poor rate, to assist those within the parish “having no means to maintain them.” The poor were divided into three groups: able-bodied adults, children, and the old or non-able-bodied (impotent). The overseers were instructed to put the able-bodied to work, to give apprenticeships to poor children, and to provide “competent sums of money” to relieve the impotent.

Deteriorating economic conditions and loss of traditional forms of charity in the 1500s

The Elizabethan Poor Law was adopted largely in response to a serious deterioration in economic circumstances, combined with a decline in more traditional forms of charitable assistance. Sixteenth century England experienced rapid inflation, caused by rapid population growth, the debasement of the coinage in 1526 and 1544-46, and the inflow of American silver. Grain prices more than tripled from 1490-1509 to 1550-69, and then increased by an additional 73 percent from 1550-69 to 1590-1609. The prices of other commodities increased nearly as rapidly — the Phelps Brown and Hopkins price index rose by 391 percent from 1495-1504 to 1595-1604. Nominal wages increased at a much slower rate than did prices; as a result, real wages of agricultural and building laborers and of skilled craftsmen declined by about 60 percent over the course of the sixteenth century. This decline in purchasing power led to severe hardship for a large share of the population. Conditions were especially bad in 1595-98, when four consecutive poor harvests led to famine conditions. At the same time that the number of workers living in poverty increased, the supply of charitable assistance declined. The dissolution of the monasteries in 1536-40, followed by the dissolution of religious guilds, fraternities, almshouses, and hospitals in 1545-49, “destroyed much of the institutional fabric which had provided charity for the poor in the past” (Slack 1990). Given the circumstances, the Acts of 1597-98 and 1601 can be seen as an attempt by Parliament both to prevent starvation and to control public order.

The Poor Law, 1601-1750

It is difficult to determine how quickly parishes implemented the Poor Law. Paul Slack (1990) contends that in 1660 a third or more of parishes regularly were collecting poor rates, and that by 1700 poor rates were universal. The Board of Trade estimated that in 1696 expenditures on poor relief totaled £400,000 (see Table 1), slightly less than 1 percent of national income. No official statistics exist for this period concerning the number of persons relieved or the demographic characteristics of those relieved, but it is possible to get some idea of the makeup of the “pauper host” from local studies undertaken by historians. These suggest that, during the seventeenth century, the bulk of relief recipients were elderly, orphans, or widows with young children. In the first half of the century, orphans and lone-parent children made up a particularly large share of the relief rolls, while by the late seventeenth century in many parishes a majority of those collecting regular weekly “pensions” were aged sixty or older. Female pensioners outnumbered males by as much as three to one (Smith 1996). On average, the payment of weekly pensions made up about two-thirds of relief spending in the late seventeenth and early eighteenth centuries; the remainder went to casual benefits, often to able-bodied males in need of short-term relief because of sickness or unemployment.

Settlement Act of 1662

One of the issues that arose in the administration of relief was that of entitlement: did everyone within a parish have a legal right to relief? Parliament addressed this question in the Settlement Act of 1662, which formalized the notion that each person had a parish of settlement, and which gave parishes the right to remove within forty days of arrival any newcomer deemed “likely to be chargeable” as well as any non-settled applicant for relief. While Adam Smith, and some historians, argued that the Settlement Law put a serious brake on labor mobility, available evidence suggests that parishes used it selectively, to keep out economically undesirable migrants such as single women, older workers, and men with large families.

Relief expenditures increased sharply in the first half of the eighteenth century, as can be seen in Table 1. Nominal expenditures increased by 72 percent from 1696 to 1748-50 despite the fact that prices were falling and population was growing slowly; real expenditures per capita increased by 84 percent. A large part of this rise was due to increasing pension benefits, especially for the elderly. Some areas also experienced an increase in the number of able-bodied relief recipients. In an attempt to deter some of the poor from applying for relief, Parliament in 1723 adopted the Workhouse Test Act, which empowered parishes to deny relief to any applicant who refused to enter a workhouse. While many parishes established workhouses as a result of the Act, these were often short-lived, and the vast majority of paupers continued to receive outdoor relief (that is, relief in their own homes).

The Poor Law, 1750-1834

The period from 1750 to 1820 witnessed an explosion in relief expenditures. Real per capita expenditures more than doubled from 1748-50 to 1803, and remained at a high level until the Poor Law was amended in 1834 (see Table 1). Relief expenditures increased from 1.0% of GDP in 1748-50 to a peak of 2.7% of GDP in 1818-20 (Lindert 1998). The demographic characteristics of the pauper host changed considerably in the late eighteenth and early nineteenth centuries, especially in the rural south and east of England. There was a sharp increase in numbers receiving casual benefits, as opposed to regular weekly pensions. The age distribution of those on relief became younger — the share of paupers who were prime-aged (20-59) increased significantly, and the share aged 60 and over declined. Finally, the share of relief recipients in the south and east who were male increased from about a third in 1760 to nearly two-thirds in 1820. In the north and west there also were shifts toward prime-age males and casual relief, but the magnitude of these changes was far smaller than elsewhere (King 2000).

Gilbert’s Act and the Removal Act

There were two major pieces of legislation during this period. Gilbert’s Act (1782) empowered parishes to join together to form unions for the purpose of relieving their poor. The Act stated that only the impotent poor should be relieved in workhouses; the able-bodied should either be found work or granted outdoor relief. To a large extent, Gilbert’s Act simply legitimized the policies of a large number of parishes that found outdoor relief both less expensive and more humane than workhouse relief. The other major piece of legislation was the Removal Act of 1795, which amended the Settlement Law so that no non-settled person could be removed from a parish unless he or she applied for relief.

Speenhamland System and other forms of poor relief

During this period, relief for the able-bodied took various forms, the most important of which were: allowances-in-aid-of-wages (the so-called Speenhamland system), child allowances for laborers with large families, and payments to seasonally unemployed agricultural laborers. The system of allowances-in-aid-of-wages was adopted by magistrates and parish overseers throughout large parts of southern England to assist the poor during crisis periods. The most famous allowance scale, though by no means the first, was that adopted by Berkshire magistrates at Speenhamland on May 6, 1795. Under the allowance system, a household head (whether employed or unemployed) was guaranteed a minimum weekly income, the level of which was determined by the price of bread and by the size of his or her family. Such scales typically were instituted only during years of high food prices, such as 1795-96 and 1800-01, and removed when prices declined. Child allowance payments were widespread in the rural south and east, which suggests that laborers’ wages were too low to support large families. The typical parish paid a small weekly sum to laborers with four or more children under age 10 or 12. Seasonal unemployment had been a problem for agricultural laborers long before 1750, but the extent of seasonality increased in the second half of the eighteenth century as farmers in southern and eastern England responded to the sharp increase in grain prices by increasing their specialization in grain production. The increase in seasonal unemployment, combined with the decline in other sources of income, forced many agricultural laborers to apply for poor relief during the winter.
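The bread-scale rule can be made concrete. The figures below follow the way the 1795 Berkshire scale is usually summarized (3s for the man and 1s 6d per dependent when the gallon loaf cost 1s, rising 3d and 1d respectively for each penny on the loaf); parish practice varied, so treat the exact numbers as illustrative rather than as this essay's own:

```python
# Sketch of an allowance-in-aid-of-wages scale of the Speenhamland type.
# Figures follow the usual summary of the 1795 Berkshire scale and are
# illustrative; actual parish scales varied. Amounts in pence (12d = 1s).

def guaranteed_income(loaf_price_pence, dependents):
    """Weekly income (pence) guaranteed to a household head.

    At a gallon-loaf price of 1s (12d): 3s (36d) for the man plus
    1s 6d (18d) per dependent. Each 1d rise in the loaf price adds
    3d for the man and 1d per dependent.
    """
    rise = loaf_price_pence - 12
    man = 36 + 3 * rise
    per_dependent = 18 + 1 * rise
    return man + dependents * per_dependent

def allowance(loaf_price_pence, dependents, weekly_wage_pence):
    """Relief tops wages up to the guaranteed minimum (never negative)."""
    return max(0, guaranteed_income(loaf_price_pence, dependents)
               - weekly_wage_pence)

# A laborer with a wife and three children (four dependents) earning
# 8s (96d) a week when the loaf costs 1s 4d (16d) receives 40d (3s 4d).
print(allowance(16, 4, 96))  # -> 40
```

Note how the guarantee rises with both the bread price and family size, which is why such scales were instituted in high-price years and withdrawn when prices fell.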

Regional differences in relief expenditures and recipients

Table 2 reports data for fifteen counties located throughout England on per capita relief expenditures for the years ending in March 1783-85, 1803, 1812, and 1831, and on relief recipients in 1802-03. Per capita expenditures were higher on average in agricultural counties than in more industrial counties, and were especially high in the grain-producing southern counties — Oxford, Berkshire, Essex, Suffolk, and Sussex. The share of the population receiving poor relief in 1802-03 varied significantly across counties, being 15 to 23 percent in the grain-producing south and less than 10 percent in the north. The demographic characteristics of those relieved also differed across regions. In particular, the share of relief recipients who were elderly or disabled was higher in the north and west than it was in the south; by implication, the share that were able-bodied was higher in the south and east than elsewhere. Economic historians typically have concluded that these regional differences in relief expenditures and numbers on relief were caused by differences in economic circumstances; that is, poverty was more of a problem in the agricultural south and east than it was in the pastoral southwest or in the more industrial north (Blaug 1963; Boyer 1990). More recently, King (2000) has argued that the regional differences in poor relief were determined not by economic structure but rather by “very different welfare cultures on the part of both the poor and the poor law administrators.”

Causes of the Increase in Relief to Able-bodied Males

What caused the increase in the number of able-bodied males on relief? In the second half of the eighteenth century, a large share of rural households in southern England suffered significant declines in real income. County-level cross-sectional data suggest that, on average, real wages for day laborers in agriculture declined by 19 percent from 1767-70 to 1795 in fifteen southern grain-producing counties, then remained roughly constant from 1795 to 1824, before increasing to a level in 1832 about 10 percent above that of 1770 (Bowley 1898). Farm-level time-series data yield a similar result — real wages in the southeast declined by 13 percent from 1770-79 to 1800-09, and remained low until the 1820s (Clark 2001).

Enclosures

Some historians contend that the Parliamentary enclosure movement, and the plowing over of commons and waste land, reduced the access of rural households to land for growing food, grazing animals, and gathering fuel, and led to the immiseration of large numbers of agricultural laborers and their families (Hammond and Hammond 1911; Humphries 1990). More recent research, however, suggests that only a relatively small share of agricultural laborers had common rights, and that there was little open access common land in southeastern England by 1750 (Shaw-Taylor 2001; Clark and Clark 2001). Thus, the Hammonds and Humphries probably overstated the effect of late eighteenth-century enclosures on agricultural laborers’ living standards, although those laborers who had common rights must have been hurt by enclosures.

Declining cottage industry

Finally, in some parts of the south and east, women and children were employed in wool spinning, lace making, straw plaiting, and other cottage industries. Employment opportunities in wool spinning, the largest cottage industry, declined in the late eighteenth century, and employment in the other cottage industries declined in the early nineteenth century (Pinchbeck 1930; Boyer 1990). The decline of cottage industry reduced the ability of women and children to contribute to household income. This, in combination with the decline in agricultural laborers’ wage rates and, in some villages, the loss of common rights, caused many rural households’ incomes in southern England to fall dangerously close to subsistence by 1795.

North and Midlands

The situation was different in the north and midlands. The real wages of day laborers in agriculture remained roughly constant from 1770 to 1810, and then increased sharply, so that by the 1820s wages were about 50 percent higher than they were in 1770 (Clark 2001). Moreover, while some parts of the north and midlands experienced a decline in cottage industry, in Lancashire and the West Riding of Yorkshire the concentration of textile production led to increased employment opportunities for women and children.

The Political Economy of the Poor Law, 1795-1834

A comparison of English poor relief with poor relief on the European continent reveals a puzzle: from 1795 to 1834 relief expenditures per capita, and expenditures as a share of national product, were significantly higher in England than on the continent. However, differences in spending between England and the continent were relatively small before 1795 and after 1834 (Lindert 1998). Simple economic explanations cannot account for the different patterns of English and continental relief.

Labor-hiring farmers take advantage of the poor relief system

The increase in relief spending in the late-eighteenth and early-nineteenth centuries was partly a result of politically-dominant farmers taking advantage of the poor relief system to shift some of their labor costs onto other taxpayers (Boyer 1990). Most rural parish vestries were dominated by labor-hiring farmers as a result of “the principle of weighting the right to vote according to the amount of property occupied,” introduced by Gilbert’s Act (1782), and extended in 1818 by the Parish Vestry Act (Brundage 1978). Relief expenditures were financed by a tax levied on all parishioners whose property value exceeded some minimum level. A typical rural parish’s taxpayers can be divided into two groups: labor-hiring farmers and non-labor-hiring taxpayers (family farmers, shopkeepers, and artisans). In grain-producing areas, where there were large seasonal variations in the demand for labor, labor-hiring farmers anxious to secure an adequate peak season labor force were able to reduce costs by laying off unneeded workers during slack seasons and having them collect poor relief. Large farmers used their political power to tailor the administration of poor relief so as to lower their labor costs. Thus, some share of the increase in relief spending in the early nineteenth century represented a subsidy to labor-hiring farmers rather than a transfer from farmers and other taxpayers to agricultural laborers and their families. In pasture farming areas, where the demand for labor was fairly constant over the year, it was not in farmers’ interests to shed labor during the winter, and the number of able-bodied laborers receiving casual relief was smaller. The Poor Law Amendment Act of 1834 reduced the political power of labor-hiring farmers, which helps to account for the decline in relief expenditures after that date.

The New Poor Law, 1834-70

The increase in spending on poor relief in the late eighteenth and early nineteenth centuries, combined with the attacks on the Poor Laws by Thomas Malthus and other political economists and the agricultural laborers’ revolt of 1830-31 (the Captain Swing riots), led the government in 1832 to appoint the Royal Commission to Investigate the Poor Laws. The Commission published its report, written by Nassau Senior and Edwin Chadwick, in March 1834. The report, described by historian R. H. Tawney (1926) as “brilliant, influential and wildly unhistorical,” called for sweeping reforms of the Poor Law, including the grouping of parishes into Poor Law unions, the abolition of outdoor relief for the able-bodied and their families, and the appointment of a centralized Poor Law Commission to direct the administration of poor relief. Soon after the report was published Parliament adopted the Poor Law Amendment Act of 1834, which implemented some of the report’s recommendations and left others, like the regulation of outdoor relief, to the three newly appointed Poor Law Commissioners.

By 1839 the vast majority of rural parishes had been grouped into poor law unions, and most of these had built or were building workhouses. On the other hand, the Commission met with strong opposition when it attempted in 1837 to set up unions in the industrial north, and the implementation of the New Poor Law was delayed in several industrial cities. In an attempt to regulate the granting of relief to able-bodied males, the Commission, and its replacement in 1847, the Poor Law Board, issued several orders to selected Poor Law Unions. The Outdoor Labour Test Order of 1842, sent to unions without workhouses or where the workhouse test was deemed unenforceable, stated that able-bodied males could be given outdoor relief only if they were set to work by the union. The Outdoor Relief Prohibitory Order of 1844 prohibited outdoor relief for both able-bodied males and females except on account of sickness or “sudden and urgent necessity.” The Outdoor Relief Regulation Order of 1852 extended the labor test for those relieved outside of workhouses.

Historical debate about the effect of the New Poor Law

Historians do not agree on the effect of the New Poor Law on the local administration of relief. Some contend that the orders regulating outdoor relief largely were evaded by both rural and urban unions, many of whom continued to grant outdoor relief to unemployed and underemployed males (Rose 1970; Digby 1975). Others point to the falling numbers of able-bodied males receiving relief in the national statistics and the widespread construction of union workhouses, and conclude that the New Poor Law succeeded in abolishing outdoor relief for the able-bodied by 1850 (Williams 1981). A recent study by Lees (1998) found that in three London parishes and six provincial towns in the years around 1850 large numbers of prime-age males continued to apply for relief, and that a majority of those assisted were granted outdoor relief. The Poor Law also played an important role in assisting the unemployed in industrial cities during the cyclical downturns of 1841-42 and 1847-48 and the Lancashire cotton famine of 1862-65 (Boot 1990; Boyer 1997). There is no doubt, however, that spending on poor relief declined after 1834 (see Table 1). Real per capita relief expenditures fell by 43 percent from 1831 to 1841, and increased slowly thereafter.

Beginning in 1840, data on the number of persons receiving poor relief are available for two days a year, January 1 and July 1; the “official” estimates in Table 1 of the annual number relieved were constructed as the average of the number relieved on these two dates. Studies conducted by Poor Law administrators indicate that the number recorded in the day counts was less than half the number assisted during the year. Lees’s “revised” estimates of annual relief recipients (see Table 1) assume that the ratio of actual to counted paupers was 2.24 for 1850-1900 and 2.15 for 1905-14; these suggest that from 1850 to 1870 about 10 percent of the population was assisted by the Poor Law each year. Given the temporary nature of most spells of relief, over a three-year period as much as 25 percent of the population made use of the Poor Law (Lees 1998).
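Lees’s adjustment is simple arithmetic: the official annual figure (the average of the two day counts) is multiplied by a ratio of actual to counted paupers. A minimal sketch in Python, using the ratios quoted above and the 1851 official figure from Table 1 (the function name is my own):

```python
def lees_annual_recipients(official_thousands, year):
    """Scale an official day-count average of paupers up to an
    estimated annual number of relief recipients, using Lees's
    ratios of actual to counted paupers: 2.24 for 1850-1900 and
    2.15 for 1905-14 (years outside these ranges are not covered;
    2.15 serves here only as a placeholder)."""
    ratio = 2.24 if 1850 <= year <= 1900 else 2.15
    return official_thousands * ratio

# Table 1 reports 941 (thousands) officially relieved in 1851;
# scaling by 2.24 gives roughly 2,108 thousand, the Lees figure.
print(round(lees_annual_recipients(941, 1851)))  # 2108
```

The same scaling reproduces the other mid-century Lees figures in Table 1, e.g. 884 thousand in 1861 becomes about 1,980 thousand.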

The Crusade Against Outrelief

In the 1870s Poor Law unions throughout England and Wales curtailed outdoor relief for all types of paupers. This change in policy, known as the Crusade Against Outrelief, was not a result of new government regulations, although it was encouraged by the newly formed Local Government Board (LGB). The Board was aided in convincing the public of the need for reform by the propaganda of the Charity Organization Society (COS), founded in 1869. The LGB and the COS maintained that the ready availability of outdoor relief destroyed the self-reliance of the poor. The COS went on to argue that the shift from outdoor to workhouse relief would significantly reduce the demand for assistance, since most applicants would refuse to enter workhouses, and therefore reduce Poor Law expenditures. A policy that promised to raise the morals of the poor and reduce taxes was hard for most Poor Law unions to resist (MacKinnon 1987).

The effect of the Crusade can be seen in Table 1. The deterrent effect associated with the workhouse led to a sharp fall in numbers on relief — from 1871 to 1876, the number of paupers receiving outdoor relief fell by 33 percent. The share of paupers relieved in workhouses increased from 12-15 percent in 1841-71 to 22 percent in 1880, and it continued to rise to 35 percent in 1911. The extent of the crusade varied considerably across poor law unions. Urban unions typically relieved a much larger share of their paupers in workhouses than did rural unions, but there were significant differences in practice across cities. In 1893, over 70 percent of the paupers in Liverpool, Manchester, Birmingham, and in many London Poor Law unions received indoor relief; however, in Leeds, Bradford, Newcastle, Nottingham and several other industrial and mining cities the majority of paupers continued to receive outdoor relief (Booth 1894).

Change in the attitude of the poor toward relief

The last third of the nineteenth century also witnessed a change in the attitude of the poor towards relief. Prior to 1870, a large share of the working class regarded access to public relief as an entitlement, although they rejected the workhouse as a form of relief. Their opinions changed over time, however, and by the end of the century most workers viewed poor relief as stigmatizing (Lees 1998). This change in perceptions led many poor people to go to great lengths to avoid applying for relief, and available evidence suggests that there were large differences between poverty rates and pauperism rates in late Victorian Britain. For example, in York in 1900, 3,451 persons received poor relief at some point during the year, less than half of the 7,230 persons estimated by Rowntree to be living in primary poverty.

The Declining Role of the Poor Law, 1870-1914

Increased availability of alternative sources of assistance

The share of the population on relief fell sharply from 1871 to 1876, and then continued to decline, at a much slower pace, until 1914. Real per capita relief expenditures increased from 1876 to 1914, largely because the Poor Law provided increasing amounts of medical care for the poor. Otherwise, the role played by the Poor Law declined over this period, due in large part to an increase in the availability of alternative sources of assistance. There was a sharp increase in the second half of the nineteenth century in the membership of friendly societies — mutual help associations providing sickness, accident, and death benefits, and sometimes old age (superannuation) benefits — and of trade unions providing mutual insurance policies. These benefits provided workers and their families with some protection against income loss, and few who belonged to friendly societies or unions providing “friendly” benefits ever needed to apply to the Poor Law for assistance.

Work relief

Local governments continued to assist unemployed males after 1870, but typically not through the Poor Law. Beginning with the Chamberlain Circular in 1886 the Local Government Board encouraged cities to set up work relief projects when unemployment was high. The circular stated that “it is not desirable that the working classes should be familiarised with Poor Law relief,” and that the work provided should “not involve the stigma of pauperism.” In 1905 Parliament adopted the Unemployed Workman Act, which established in all large cities distress committees to provide temporary employment to workers who were unemployed because of a “dislocation of trade.”

Liberal welfare reforms, 1906-1911

Between 1906 and 1911 Parliament passed several pieces of social welfare legislation collectively known as the Liberal welfare reforms. These laws provided free meals and medical inspections (later treatment) for needy school children (1906, 1907, 1912) and weekly pensions for poor persons over age 70 (1908), and established national sickness and unemployment insurance (1911). The Liberal reforms purposely reduced the role played by poor relief, and paved the way for the abolition of the Poor Law.

The Last Years of the Poor Law

During the interwar period the Poor Law served as a residual safety net, assisting those who fell through the cracks of the existing social insurance policies. The high unemployment of 1921-38 led to a sharp increase in numbers on relief. The official count of relief recipients rose from 748,000 in 1914 to 1,449,000 in 1922; the number relieved averaged 1,379,800 from 1922 to 1938. A large share of those on relief were unemployed workers and their dependents, especially in 1922-26. Despite the extension of unemployment insurance in 1920 to virtually all workers except the self-employed and those in agriculture or domestic service, there still were large numbers who either did not qualify for unemployment benefits or who had exhausted their benefits, and many of them turned to the Poor Law for assistance. The vast majority were given outdoor relief; from 1921 to 1923 the number of outdoor relief recipients increased by 1,051,000 while the number receiving indoor relief increased by 21,000.

The Poor Law becomes redundant and is repealed

Despite the important role played by poor relief during the interwar period, the government continued to adopt policies that bypassed the Poor Law and left it “to die by attrition and surgical removals of essential organs” (Lees 1998). The Local Government Act of 1929 abolished the Poor Law unions, and transferred the administration of poor relief to the counties and county boroughs. In 1934 the responsibility for assisting those unemployed who were outside the unemployment insurance system was transferred from the Poor Law to the Unemployment Assistance Board. Finally, from 1945 to 1948, Parliament adopted a series of laws that together formed the basis for the welfare state, and made the Poor Law redundant. The National Assistance Act of 1948 officially repealed all existing Poor Law legislation, and replaced the Poor Law with the National Assistance Board to act as a residual relief agency.

Table 1
Relief Expenditures and Numbers on Relief, 1696-1936

Columns: (1) Year; (2) expenditure on relief (£000s); (3) real expenditure per capita (1803 = 100); (4) expenditure as a share of GDP (Slack); (5) expenditure as a share of GDP (Lindert); (6) number relieved, official count (000s); (7) share of population relieved, official; (8) number relieved, Lees’s estimate (000s); (9) share of population relieved, Lees; (10) share of paupers relieved indoors.
1696 400 24.9 0.8
1748-50 690 45.8 1.0 0.99
1776 1 530 64.0 1.6 1.59
1783-85 2 004 75.6 2.0 1.75
1803 4 268 100.0 1.9 2.15 1 041 11.4 8.0
1813 6 656 91.8 2.58
1818 7 871 116.8
1821 6 959 113.6 2.66
1826 5 929 91.8
1831 6 799 107.9 2.00
1836 4 718 81.1
1841 4 761 61.8 1.12 1 299 8.3 2 910 18.5 14.8
1846 4 954 69.4 1 332 8.0 2 984 17.8 15.0
1851 4 963 67.8 1.07 941 5.3 2 108 11.9 12.1
1856 6 004 62.0 917 4.9 2 054 10.9 13.6
1861 5 779 60.0 0.86 884 4.4 1 980 9.9 13.2
1866 6 440 65.0 916 4.3 2 052 9.7 13.7
1871 7 887 73.3 1 037 4.6 2 323 10.3 14.2
1876 7 336 62.8 749 3.1 1 678 7.0 18.1
1881 8 102 69.1 0.70 791 3.1 1 772 6.9 22.3
1886 8 296 72.0 781 2.9 1 749 6.4 23.2
1891 8 643 72.3 760 2.6 1 702 5.9 24.0
1896 10 216 84.7 816 2.7 1 828 6.0 25.9
1901 11 549 84.7 777 2.4 1 671 5.2 29.2
1906 14 036 96.9 892 2.6 1 918 5.6 31.1
1911 15 023 93.6 886 2.5 1 905 5.3 35.1
1921 31 925 75.3 627 1.7 35.7
1926 40 083 128.3 1 331 3.4 17.7
1931 38 561 133.9 1 090 2.7 21.5
1936 44 379 165.7 1 472 3.6 12.6

Notes: Relief expenditure data are for the year ended on March 25. In calculating real per capita expenditures, I used cost of living and population data for the previous year.
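The deflation procedure described in the notes can be made explicit. The sketch below (a hypothetical function with made-up inputs, not the author’s code or data) converts nominal expenditure into a real per capita index with the base year set to 100, as in the second column of Table 1:

```python
def real_per_capita_index(nominal, cost_of_living, population,
                          base_nominal, base_col, base_pop):
    """Real per capita expenditure indexed so the base year = 100.
    Per the table notes, the cost-of-living and population figures
    should be those of the year preceding each expenditure year."""
    value = nominal / (cost_of_living * population)
    base = base_nominal / (base_col * base_pop)
    return 100 * value / base

# Illustrative, made-up inputs: nominal spending doubles while
# prices double and population is unchanged, so the real per
# capita index stays at its base level of 100.
print(real_per_capita_index(200.0, 2.0, 10.0, 100.0, 1.0, 10.0))  # 100.0
```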

Table 2
County-level Poor Relief Data, 1783-1831

Columns: (1) County; (2)-(5) per capita relief spending in shillings, 1783-85, 1802-03, 1812, and 1831; (6) percent of population relieved, 1802-03; (7) share of relief recipients over 60 or disabled, 1802-03; (8) share of land in arable farming, c. 1836; (9) percent of population employed in agriculture, 1821.
North
Durham 2.78 6.50 9.92 6.83 9.3 22.8 54.9 20.5
Northumberland 2.81 6.67 7.92 6.25 8.8 32.2 46.5 26.8
Lancashire 3.48 4.42 7.42 4.42 6.7 15.0 27.1 11.2
West Riding 2.91 6.50 9.92 5.58 9.3 18.1 30.0 19.6
Midlands
Stafford 4.30 6.92 8.50 6.50 9.1 17.2 44.8 26.6
Nottingham 3.42 6.33 10.83 6.50 6.8 17.3 na 35.4
Warwick 6.70 11.25 13.33 9.58 13.3 13.7 47.5 27.9
Southeast
Oxford 7.07 16.17 24.83 16.92 19.4 13.2 55.8 55.4
Berkshire 8.65 15.08 27.08 15.75 20.0 12.7 58.5 53.3
Essex 9.10 12.08 24.58 17.17 16.4 12.7 72.4 55.7
Suffolk 7.35 11.42 19.33 18.33 16.6 11.4 70.3 55.9
Sussex 11.52 22.58 33.08 19.33 22.6 8.7 43.8 50.3
Southwest
Devon 5.53 7.25 11.42 9.00 12.3 23.1 22.5 40.8
Somerset 5.24 8.92 12.25 8.83 12.0 20.8 24.4 42.8
Cornwall 3.62 5.83 9.42 6.67 6.6 31.0 23.8 37.7
England & Wales 4.06 8.92 12.75 10.08 11.4 16.0 48.0 33.0

References

Blaug, Mark. “The Myth of the Old Poor Law and the Making of the New.” Journal of Economic History 23 (1963): 151-84.

Blaug, Mark. “The Poor Law Report Re-examined.” Journal of Economic History 24 (1964): 229-45.

Boot, H. M. “Unemployment and Poor Law Relief in Manchester, 1845-50.” Social History 15 (1990): 217-28.

Booth, Charles. The Aged Poor in England and Wales. London: MacMillan, 1894.

Boyer, George R. “Poor Relief, Informal Assistance, and Short Time during the Lancashire Cotton Famine.” Explorations in Economic History 34 (1997): 56-76.

Boyer, George R. An Economic History of the English Poor Law, 1750-1850. Cambridge: Cambridge University Press, 1990.

Brundage, Anthony. The Making of the New Poor Law. New Brunswick, N.J.: Rutgers University Press, 1978.

Clark, Gregory. “Farm Wages and Living Standards in the Industrial Revolution: England, 1670-1869.” Economic History Review, 2nd series 54 (2001): 477-505.

Clark, Gregory and Anthony Clark. “Common Rights to Land in England, 1475-1839.” Journal of Economic History 61 (2001): 1009-36.

Digby, Anne. “The Labour Market and the Continuity of Social Policy after 1834: The Case of the Eastern Counties.” Economic History Review, 2nd series 28 (1975): 69-83.

Eastwood, David. Governing Rural England: Tradition and Transformation in Local Government, 1780-1840. Oxford: Clarendon Press, 1994.

Fraser, Derek, editor. The New Poor Law in the Nineteenth Century. London: Macmillan, 1976.

Hammond, J. L. and Barbara Hammond. The Village Labourer, 1760-1832. London: Longmans, Green, and Co., 1911.

Hampson, E. M. The Treatment of Poverty in Cambridgeshire, 1597-1834. Cambridge: Cambridge University Press, 1934.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Steven. Poverty and Welfare in England, 1700-1850: A Regional Perspective. Manchester: Manchester University Press, 2000.

Lees, Lynn Hollen. The Solidarities of Strangers: The English Poor Laws and the People, 1770-1948. Cambridge: Cambridge University Press, 1998.

Lindert, Peter H. “Poor Relief before the Welfare State: Britain versus the Continent, 1780- 1880.” European Review of Economic History 2 (1998): 101-40.

MacKinnon, Mary. “English Poor Law Policy and the Crusade Against Outrelief.” Journal of Economic History 47 (1987): 603-25.

Marshall, J. D. The Old Poor Law, 1795-1834. 2nd edition. London: Macmillan, 1985.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Pound, John. Poverty and Vagrancy in Tudor England, 2nd edition. London: Longmans, 1986.

Rose, Michael E. “The New Poor Law in an Industrial Area.” In The Industrial Revolution, edited by R.M. Hartwell. Oxford: Oxford University Press, 1970.

Rose, Michael E. The English Poor Law, 1780-1930. Newton Abbot: David & Charles, 1971.

Shaw-Taylor, Leigh. “Parliamentary Enclosure and the Emergence of an English Agricultural Proletariat.” Journal of Economic History 61 (2001): 640-62.

Slack, Paul. Poverty and Policy in Tudor and Stuart England. London: Longmans, 1988.

Slack, Paul. The English Poor Law, 1531-1782. London: Macmillan, 1990.

Smith, Richard. “Charity, Self-interest and Welfare: Reflections from Demographic and Family History.” In Charity, Self-Interest and Welfare in the English Past, edited by Martin Daunton. New York: St. Martin’s, 1996.

Sokoll, Thomas. Household and Family among the Poor: The Case of Two Essex Communities in the Late Eighteenth and Early Nineteenth Centuries. Bochum: Universitätsverlag Brockmeyer, 1993.

Solar, Peter M. “Poor Relief and English Economic Development before the Industrial Revolution.” Economic History Review, 2nd series 48 (1995): 1-22.

Tawney, R. H. Religion and the Rise of Capitalism: A Historical Study. London: J. Murray, 1926.

Webb, Sidney and Beatrice Webb. English Poor Law History. Part I: The Old Poor Law. London: Longmans, 1927.

Williams, Karel. From Pauperism to Poverty. London: Routledge, 1981.

Citation: Boyer, George. “English Poor Laws”. EH.Net Encyclopedia, edited by Robert Whaples. May 7, 2002. URL http://eh.net/encyclopedia/english-poor-laws/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both by GDP per capita and by capital stock. On the United Nations Human Development Index, Norway has ranked among the top three countries for several years, and in some years at the very top. Huge stocks of natural resources, combined with a skilled labor force and the adoption of new technology, made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Year GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
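The rates in Table 1 are compound annual growth rates, so the cumulative effect of a phase follows from (1 + g)^n. A small illustration (my own arithmetic, not from Grytten):

```python
def cumulative_growth(annual_rate_pct, years):
    """Total growth factor implied by a compound annual growth rate."""
    return (1 + annual_rate_pct / 100) ** years

# Per capita growth of 3.81 percent per year over 1945-1973
# (28 years) implies roughly a 2.85-fold rise in GDP per capita.
print(round(cumulative_growth(3.81, 28), 2))  # 2.85
```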

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, and timber, along with a merchant fleet engaged in domestic and international trade. Owing to topography and climate, the communities in the north and west were more dependent on fishing and foreign trade than those in the south and east, which relied mainly on agriculture. Prior to independence, agricultural output, fish catches, and wars were decisive for fluctuations in the economy. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy flourished alongside a first era of liberalism. Foreign trade in fish and timber had been important to the Norwegian economy for centuries, and now the merchant fleet was growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a close union lasting 417 years, it was a typically egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. With few exceptions, the series reveals steady growth and no huge fluctuations. However, economic growth as a more or less continuous process started only in the 1840s, and the growth process slowed during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period in question, while growth was impressive and steady from 1945 until the mid-1970s, and slower thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, thanks to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured by GDP per capita, Norway was well above the European average, in the middle range of the West European countries, and in fact well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the troubled recession years of the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth up to the mid-1870s. This impressive growth was matched by only a few other countries. The growth process was initiated largely by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with a shift from arable to livestock production, increased labor productivity in agriculture by about 150 percent between 1835 and 1910. Exports of timber, fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried international goods all over the world at low prices.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber, along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average, while exports grew at an annual rate of 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries also grew strongly in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. In consequence of the Reformation, reading became compulsory; as a result, Norway acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882, as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had higher emigration rates than Norway between 1836 and 1930, a period in which 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, though they expanded in others. A second reason for the slowdown in Norway was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to the trade deficit and a lack of gold and capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard also caused the appreciation of the Norwegian currency, the krone, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world. However, due to a lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels, but their market was diminishing, and when the Norwegian steam fleet finally surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus, from the middle of the 1870s until 1905, Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish-preserving and cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing industry connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it must have taken place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. In economic terms, however, the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the Allied powers, which in turn protected the Norwegian merchant fleet. During the war’s first years Norwegian ship owners profited from the war, and the economy boomed. From 1917, when Germany began unrestricted submarine warfare against non-friendly vessels, Norway took heavy losses. A recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end, this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom was accompanied by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession, beginning in autumn 1920, hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only by that of the United Kingdom. There are two major reasons for the devastating effect of the postwar recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries, particularly because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920 and a hard deflationary policy thereafter made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, Norway pursued a long, though not consistently maintained, deflationary monetary policy aimed at restoring the par value of the krone (NOK), a goal reached in May 1928. In consequence, another recession hit the economy during the middle of the 1920s. Norway was thus one of the worst performers in the western world in the 1920s, as can best be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success in the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in late summer 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This was due not only to the international crisis, but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. Probably more important, however, was that Norway left gold as early as September 27, 1931, only a week after the United Kingdom. The countries that left gold early, and thereby could pursue a more inflationary monetary policy, were the best performers in the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its bottom in late 1932. Despite a relatively rapid recovery and significant growth in both GDP and employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

The standard of living deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in the labor supply, a result of the immigration restrictions imposed by the North American countries from the 1920s onwards.

Denmark and Norway were both victims of a German surprise attack on April 9, 1940. After two months of fighting, the Allied troops in Norway surrendered on June 7, and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was primarily based on the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, which earned the money that financed the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office since 1935, seized the opportunity to establish a strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, due to a lack of hard currencies, it accepted the Marshall aid program. By receiving 400 million dollars from 1948 to 1952, Norway was one of the biggest recipients per capita.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations. In 1960 the country also joined the European Free Trade Association (EFTA). In 1958 Norway had made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent, foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been credited to the large public sector and good economic planning. The Nordic model, with its huge public sector, has been said to be a success in this period. A closer look, nevertheless, reveals that the Norwegian growth rate in the period was lower than that of most western nations. The same is true for Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily deliver very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock hit in autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, on what was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation of the 1970s, so that economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy relied on branch and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to the markets. Hence, neither productivity nor business structure had the incentives to keep pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward through spillover effects from the high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive, and Norway deindustrialized at a more rapid pace than most of her largest trading partners. Due to the petroleum sector, however, Norway experienced high growth rates in the last three decades of the twentieth century, bringing Norway to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems both in the eighties and in the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the postwar period. Norway had already joined the international wave of credit liberalization, and the new government added fuel to this policy. However, alongside the credit liberalization, parliament still pursued a policy that prevented market forces from setting interest rates. Instead they were set by politicians, in contradiction to the credit liberalization policy. The level of interest rates was an important part of the political game for power, and thus they were set significantly below the market level. In consequence, a substantial credit boom was created in the early 1980s and continued to the late spring of 1986. As a result, monetary expansion produced an artificial boom and an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government pursued from May 1986. Interest rates were kept persistently high as the government now tried to run a credible fixed-currency policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway in autumn 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

As a consequence of these years of monetary expansion followed by contraction, most western countries experienced financial crises, and Norway’s was relatively severe. Housing prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the subsequent devaluation, Norway enjoyed growth until 1998, due to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market. At the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries, and the krone depreciated. The fixed exchange rate policy had to be abandoned, and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to maintain a tighter fiscal policy, and interest rates were high. As a result, Norway escaped the overheating of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway. In this respect the historical tradition of raw material dependency has had its renaissance. Unlike many other countries rich in raw materials, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors for Norway’s ability to turn resource abundance into economic prosperity are an educated work force, the adoption of advanced technology used in other leading countries, stable and reliable institutions, and democratic rule.

References

Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no.1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Tapir: Trondheim, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998):

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003”. Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden”. Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/

Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories had been under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter previously known as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile, but the humid tropical climate, subject to monsoonal weather patterns, creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though a great deal of this has been removed for commercial purposes over the last century, leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang dipertuan Agung) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, and rubber products. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for Newly-Industrialized Country (NIC) status (30 percent of exports to consist of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed in the long term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally manifested in violence, notably in 1969 (see below).

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods and tree resins. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions (Arabs, Indians and Chinese) regularly visited, and some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West, which saw the innovation of large-scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both respects, the deficiency was made up largely from foreign sources.

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface; difficulties with flooding limited the depth of mining, and their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants, who dominated the industry in the nineteenth century, bringing new technology that improved ore recovery and water control and facilitated mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who again introduced new technology – such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate at even deeper levels. These innovations required substantial capital, for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases endemic to tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a source of raw material for new industries in the West, notably tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production expandable only at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished, and after initial hesitancy over the five years needed for the trees to reach productive age, planters, both Chinese and European, rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig.1). Average values fell thereafter, but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 1.34 million acres), some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with expenses advanced that migrants subsequently had to pay off. The flow of immigration was directly related to economic conditions in Malaysia; for example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century. However, their geographical location slightly away from the main trade route (see map) and their rugged internal terrain, costly for transport, made them less attractive to foreign investment. Nevertheless, the discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, put Sarawak more prominently into the export business. As in Malaya, the labor force consisted largely of immigrants from China and, to a lesser extent, Java.

The growth of production for export in Malaysia was facilitated by the development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. 
As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

                     1900    1929    1950    1973    1990
Malaya/Malaysia(a)    600(b) 1910    1828    3088    5775
Singapore               -       -    2276(c) 5372   14441
Burma                  523     651    304     446     562
Thailand               594     623    652    1559    3694
Indonesia              617    1009    727    1253    2118
Philippines            735    1106    943    1629    1934
South Korea            568     945    565    1782    6012
Japan                  724    1192   1208    7133   13197

Notes: (a) Malaya to 1973; (b) Guesstimate; (c) 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931-41. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market; the relatively high wage levels in Singapore, which made products uncompetitive as exports; and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948), from which Singapore, with its predominantly Chinese population (about 75%), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (the so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon, and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed, and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed, in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

After the war two long-term problems came to the forefront. These were (a) the political fragmentation (see above) which had long prevented a centralized approach to economic development, coupled with control from Britain, which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula, from the 1970s. Gas was exported in liquefied form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings but had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions that had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was to restructure the Malaysian economy over two decades, 1970-90, with the following objectives:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group had about two-thirds of jobs in the primary sector where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle class occupations (e.g. professions, management) the share was only 13 percent.
  3. to eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI), with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang, where production was carried on with the undertaking that the output would be exported. Firms locating there received concessions such as duty-free imports of raw materials and capital goods, and tax relief, aimed primarily at foreign investors, who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example, the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                               1970            1990
(a) Wealth ownership (% share)
    Bumiputera                                  2.0            20.3
    Other Malaysians                           34.6            54.6
    Foreigners                                 63.4            25.1
(b) Employment (% of total workers in each sector)
    Primary sector (agriculture, mineral
    extraction, forest products and fishing)
    Bumiputera                                 67.6 [61.0]*    71.2 [36.7]*
    Others                                     32.4            28.8
    Secondary sector
    (manufacturing and construction)
    Bumiputera                                 30.8 [14.6]*    48.0 [26.3]*
    Others                                     69.2            52.0
    Tertiary sector (services)
    Bumiputera                                 37.9 [24.4]*    51.0 [36.9]*
    Others                                     62.1            49.0

Note: [ ]* is the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors such as agriculture/mining (35.7%) and banking/insurance (49.7%) bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these three decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969 government maintained a firm control over the administrative machinery. Malaysia’s Five Year Development plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990 and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life-expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981 to 2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990; it was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.
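The quadrupling target carries an implicit growth requirement that is easy to make explicit. As a rough illustration (the 30-year horizon, 1990-2020, is an assumption for the arithmetic, not a figure stated in the text), the required compound annual rate can be computed:

```python
# Illustrative arithmetic only: the 30-year horizon (1990-2020) is an
# assumed baseline for the Vision 2020 target, not a figure from the article.
target_multiple = 4.0   # quadruple per capita income
years = 30              # assumed horizon

# Solve (1 + g)^years = target_multiple for g, the compound annual growth rate
g = target_multiple ** (1 / years) - 1
print(f"Implied growth requirement: {g:.1%} per year")  # roughly 4.7% per year
```

Sustained growth of just under 5 percent a year is thus implied, comfortably below the 8-9 percent rates the economy actually recorded for much of the 1990s.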

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Twin Towers (at the time the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates beginning with the Thai baht in May 1997, spreading rapidly throughout East and Southeast Asia and severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the U.S. dollar, while foreign capital repatriated before it had remained in the country for at least twelve months was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section) especially compared to neighboring Indonesia.
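The exchange-rate figures above can be read two ways, and the arithmetic is worth spelling out: the ringgit cost of a dollar roughly doubled, which means the dollar value of a ringgit roughly halved. A minimal sketch of the two calculations:

```python
# The ringgit moved from RM 2.42 to RM 4.88 per U.S. dollar (figures from the text).
rm_per_usd_before = 2.42
rm_per_usd_after = 4.88

# Rise in the ringgit cost of one dollar (dollar appreciation against the ringgit)
dollar_appreciation = rm_per_usd_after / rm_per_usd_before - 1   # about 102%

# Fall in the dollar value of one ringgit (ringgit depreciation)
ringgit_depreciation = 1 - rm_per_usd_before / rm_per_usd_after  # about 50%

print(f"Dollar cost up {dollar_appreciation:.0%}; ringgit value down {ringgit_depreciation:.0%}")
```

The two percentages differ because they use different bases; a currency that loses half its value requires the other currency to double against it.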

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

1960-69 1971-80 1981-89
Japan 10.9 5.0 4.0
Asian “Tigers”
Hong Kong 10.0 9.5 7.2
South Korea 8.5 8.7 9.3
Singapore 8.9 9.0 6.9
Taiwan 11.6 9.7 8.1
ASEAN-4
Indonesia 3.5 7.9 5.2
Malaysia 6.5 8.0 5.4
Philippines 4.9 6.2 1.7
Thailand 8.3 9.9 7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growth into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” passed into common usage. The exception was Japan, which encountered major problems with structural change and an over-extended banking system. Post-crisis the countries of the region have started to recover, but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov.6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes bringing early exposure to the international economy. The sparse indigenous population and labor force has been supplemented by immigrants, mainly from neighboring Asian countries with many becoming permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge: Cambridge University Press, 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Milan, Eleventh International Economic History Congress, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

History of Labor Turnover in the U.S.

Laura Owen, DePaul University

Labor turnover measures the movement of workers in and out of employment with a particular firm. Consequently, concern with the issue and interest in measuring such movement only arose when working for an employer (rather than self-employment in craft or agricultural production) became the norm. The rise of large-scale firms in the late nineteenth century and the decreasing importance (in percentage terms) of agricultural employment meant that a growing number of workers were employed by firms. It was only in this context that interest in measuring labor turnover and understanding its causes began.

Trends in Labor Turnover

Labor turnover is typically measured in terms of the separation rate (quits, layoffs, and discharges per 100 employees on the payroll). The aggregate data on turnover among U.S. workers are available from a series of studies focusing almost entirely on the manufacturing sector. These data show high rates of labor turnover (annual rates exceeding 100%) in the early decades of the twentieth century, substantial declines in the 1920s, significant fluctuations during the economic crisis of the 1930s and the boom of the World War II years, and a return to the low rates of the 1920s in the post-war era. (See Figure 1 and its notes.) Firm and state level data (from the late nineteenth and early twentieth centuries) also indicate that labor turnover rates exceeding 100 were common to many industries.
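As a rough illustration of the measure, the separation rate is simply total separations per 100 employees on the payroll. The figures below are hypothetical, chosen only to mirror the scale of the early manufacturing data:

```python
def separation_rate(quits, layoffs, discharges, avg_employment):
    """Annual separations per 100 employees on the payroll."""
    return 100 * (quits + layoffs + discharges) / avg_employment

# A firm averaging 500 workers that records 450 quits, 120 layoffs,
# and 30 discharges over a year has a separation rate above 100:
print(separation_rate(450, 120, 30, 500))  # 120.0 per 100 employees
```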

Contemporaries expressed concern over the high rates of labor turnover in the early part of the century and conducted numerous studies to understand its causes and consequences. (See for example, Douglas 1918, Lescohier 1923, and Slichter 1921.) Some of these studies focused on the irregularity in labor demand which resulted in seasonal and cyclical layoffs. Others interpreted the high rates of labor turnover as an indication of worker dissatisfaction and labor relations problems. Many observers began to recognize that labor turnover was costly for the firm (in terms of increased hiring and training expenditures) and for the worker (in terms of irregularity of income flows).

Both the high rates of labor turnover in the early years of the twentieth century and the dramatic declines in the 1920s are closely linked with changes in the worker-initiated component of turnover rates. During the 1910s and 1920s, quits accounted (on average) for over seventy percent of all separations and the decline in annual separation rates from 123.4 in 1920 to 37.1 in 1928 was primarily driven by a decline in quit rates, from 100.9 to 25.8 per 100 employees.

Explanations of the Decline in Turnover in the 1920s

The aggregate decline in labor turnover in the 1920s appears to be the beginning of a long run trend. Numerous studies, seeking to identify why workers began quitting their jobs less frequently, have pointed to the role of altered employment relationships. (See, for example, Owen 1995b, Ozanne 1967, and Ross 1958.) The new practices of employers, categorized initially as welfare work and later as the development of internal labor markets, included a variety of policies aimed at strengthening the attachment between workers and firms. The most important of these policies were the establishment of personnel or employment departments, the offering of seniority-based compensation, and the provision of on-the-job training and internal promotion ladders. In the U.S., these changes in employment practices began at a few firms around the turn of the twentieth century, intensified during WWI and became more widespread in the 1920s. However, others have suggested that the changes in quit behavior in the 1920s were the result of immigration declines (due to newly implemented quotas) and slack labor markets (Goldin 2000, Jacoby 1985).

Even firms’ motivation for implementing the new practices is subject to debate. One argument focuses on how the shift from craft to mass production increased the importance of firm-specific skills and on-the-job training. Firms’ greater investment in training meant that it was more costly to have workers leave and provided the incentive for firms to lower turnover. However, others have provided evidence that job ladders and internal promotion were not always implemented to reward the increased worker productivity resulting from on-the-job training. Rather, these employment practices were sometimes attempts to appease workers and to prevent unionization. Labor economists have also noted that providing various forms of deferred compensation (pensions, wages which increase with seniority, etc.) can increase worker effort and reduce the costs of monitoring workers. Whether promotion ladders established within firms reflect an attempt to forestall unionization, a means of protecting firm investments in training by lowering turnover, or a method of ensuring worker effort is still open to debate, though the explanations are not necessarily mutually exclusive (Jacoby 1983, Lazear 1981, Owen 1995b, Sundstrom 1988, Stone 1974).

Subsequent Patterns of Labor Turnover

In the 1930s and 1940s the volatility in labor turnover increased and the relationships between the components of total separations shifted (Figure 1). The depressed labor markets of the 1930s meant that procyclical quit rates declined, but increased layoffs kept total separation rates relatively high (on average 57 per 100 employees between 1930 and 1939). During the tight labor markets of the World War II years, turnover again reached rates exceeding 100%, with increases in quits acting as the primary determinant. Quits and total separations declined after the war, producing much lower and less volatile turnover rates between 1950 and 1970 (Figure 1).

Though the decline in labor turnover in the early part of the twentieth century was seen by many as a sign of improved labor-management relations, the low turnover rates of the post-WWII era led macroeconomists to begin to question the benefits of strong attachments between workers and firms. Specifically, there was concern that long-term employment contracts (either implicit or explicit) might generate wage rigidities which could result in increased unemployment and other labor market adjustment problems (Ross 1958). More recently, labor economists have wondered whether the movement toward long-term attachments between workers and firms is reversing itself. “Changes in Job Stability and Job Security” a special issue of the Journal of Labor Economics (October 1999) includes numerous analyses suggesting that job instability increased among some groups of workers (particularly those with longer tenure) amidst the restructuring activities of the 1990s.

Turnover Data and Methods of Analysis

The historical analyses of labor turnover have relied upon two types of data. The first type consists of firm-level data on turnover within a particular workplace or governmental collections (through firms) of data on the level of turnover within particular industries or geographic locales. If these turnover data are broken down into their components – quits, layoffs, and discharges – a quit rate model (such as the one developed by Parsons 1973) can be employed to analyze the worker-initiated component of turnover as it relates to job search behavior. These analyses (see for example, Owen 1995a) estimate quit rates as a function of variables reflecting labor demand conditions (e.g., unemployment and relative wages) and of labor supply variables reflecting the composition of the labor force (e.g., age/gender distributions and immigrant flows).
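The basic logic of such a quit-rate estimation can be sketched with a simple one-variable least-squares fit. The data below are invented for illustration (actual studies such as Owen 1995a use historical series and multivariate specifications), but they show the kind of relationship estimated: quits falling as unemployment rises.

```python
def ols(x, y):
    """Closed-form simple least-squares fit: returns (intercept, slope)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope

# Invented annual observations: quit rates fall as unemployment rises.
unemployment = [2.0, 4.0, 6.0, 8.0, 10.0]    # percent
quit_rates = [96.0, 92.0, 88.0, 84.0, 80.0]  # quits per 100 employees

intercept, slope = ols(unemployment, quit_rates)
print(intercept, slope)  # 100.0 -2.0: each point of unemployment lowers quits by 2
```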

The second type of turnover data is generated using employment records or governmental surveys as the source for information specific to individual workers. Job histories can be created with these data and used to analyze the impact of individual characteristics such as age, education, and occupation on labor turnover, firm tenure and occupational experience. Analysis of this type of data typically employs a “hazard” model that estimates the probability of a worker’s leaving a job as a function of individual worker characteristics. (See, for example, Carter and Savoca 1992, Maloney 1998, Whatley and Sedo 1998.)
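A minimal sketch of the quantity a hazard model estimates, using hypothetical completed job spells; the cited analyses use regression-based hazard models with worker covariates rather than this bare empirical rate:

```python
def empirical_hazard(spell_lengths, t):
    """Probability of leaving at tenure t, conditional on surviving to t."""
    exits_at_t = sum(1 for s in spell_lengths if s == t)
    at_risk = sum(1 for s in spell_lengths if s >= t)
    return exits_at_t / at_risk if at_risk else 0.0

# Hypothetical completed job spells (years with the firm) for ten workers:
spells = [1, 1, 1, 2, 2, 3, 5, 8, 12, 20]
print(empirical_hazard(spells, 1))  # 0.3: three of the ten leave in year one
```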

Labor Turnover and Long Term Employment

Another measure of worker/firm attachment is tenure – the number of years a worker stays with a particular job or firm. While significant declines in labor turnover (such as those observed in the 1920s) will likely be reflected in rising average tenure with the firm, high rates of labor turnover do not preclude long tenure for part of the workforce. If high turnover is concentrated among a subset of workers (the young or the unskilled), then it can coexist with lifetime jobs for another subset (the skilled). For example, the high rates of labor turnover that were common until the mid-1920s co-existed with long-term jobs for some workers. The evidence indicates that while long-term employment became more common in the twentieth century, it was not completely absent from nineteenth-century labor markets (Carter 1988, Carter and Savoca 1990, Hall 1982).

Notes on Turnover Data in Figure 1

The turnover data used to generate Figure 1 come from three separate sources: Brissenden and Frankel (1920) for the 1910-1918 data; Berridge (1929) for the 1919-1929 data; and U.S. Bureau of the Census (1975) for the 1930-1970 data. Several adjustments were necessary to present them in a single format. The Brissenden and Frankel study calculated the separate components of turnover (quits and layoffs) from only a subsample of their data. The subsample data were used to calculate the percentage of total separations accounted for by quits and layoffs and these percentages were applied to the total separations data from the full sample to estimate the quit and layoff components. The 1930-1970 data reported in Historical Statistics of the United States were collected by the U.S. Bureau of Labor Statistics and originally reported in Employment and Earnings, U.S., 1909-1971. Unlike the earlier series, these data were originally reported as average monthly rates and have been converted into annualized figures by multiplying by 12.
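The two adjustments described above can be sketched as follows, with invented numbers: apportioning full-sample separations using subsample quit/layoff shares, and annualizing an average monthly rate by multiplying by 12.

```python
def apportion_separations(total_separations, subsample_quits, subsample_layoffs):
    """Split full-sample separations using quit/layoff shares from a subsample."""
    quit_share = subsample_quits / (subsample_quits + subsample_layoffs)
    return total_separations * quit_share, total_separations * (1 - quit_share)

def annualize(average_monthly_rate):
    """Convert an average monthly rate per 100 employees to an annual rate."""
    return average_monthly_rate * 12

# A full-sample total separation rate of 120 per 100 employees, with a
# subsample recording 70 quits and 30 layoffs:
quits_est, layoffs_est = apportion_separations(120.0, 70, 30)
print(round(quits_est, 1), round(layoffs_est, 1))  # 84.0 36.0

# An average monthly separation rate of 4.5 per 100 employees:
print(annualize(4.5))  # 54.0
```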

In addition to the adjustments described above, there are four issues relating to the comparability of these data which should be noted. First, the turnover data for the 1919 to 1929 period are median rates, whereas the data from before and after that period were compiled as weighted averages of the rates of all firms surveyed. If larger firms have lower turnover rates (as Arthur Ross 1958 notes), medians will be higher than weighted averages. The data for the one year covered by both studies (1919) confirm this difference: the median turnover rates from Berridge (1920s data) exceed the weighted average turnover rates from Brissenden and Frankel (1910s data). Brissenden and Frankel suggested that the actual turnover of labor in manufacturing may have been much higher than their sample statistics indicate:

The establishments from which the Bureau of Labor Statistics has secured labor mobility figures have necessarily been the concerns which had the figures to give, that is to say, concerns which had given rather more attention than most firms to their force-maintenance problems. These firms reporting are chiefly concerns which had more or less centralized employment systems and were relatively more successful in the maintenance of a stable work force (1920, p. 40).

A similar underestimation bias continued with the BLS collection of data because the average firm size in the sample was larger than the average firm size in the whole population of manufacturing firms (U.S. Bureau of the Census, p. 160), and larger firms tend to have lower turnover rates.

Second, the data for 1910-1918 (Brissenden and Frankel) include workers in public utilities and mercantile establishments in addition to workers in manufacturing industries and are therefore not directly comparable to the later series on the turnover of manufacturing workers. However, these non-manufacturing workers had lower turnover rates than the manufacturing workers in both 1913/14 and 1917/18 (the two years for which Brissenden and Frankel present industry-level data). Thus, the decline in turnover of manufacturing workers from the 1910s to the 1920s may actually be underestimated.

Third, the turnover rates for 1910 to 1918 (Brissenden and Frankel) were originally calculated per labor hour. The number of employees was estimated at one worker per 3,000 labor hours – the number of hours in a typical work year. This conversion generates the number of full-year workers, not allowing for any procyclicality of labor hours. If labor hours are procyclical, this calculation overstates (understates) the number of workers during an upswing (downswing), thus dampening the response of turnover rates to economic cycles.
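The conversion works as follows, with hypothetical figures: total labor hours are divided by 3,000 to estimate the number of full-year workers, and separations are then expressed per 100 such workers.

```python
HOURS_PER_WORK_YEAR = 3_000  # the typical work year assumed in the original series

def full_year_workers(total_labor_hours):
    """Estimate the number of full-year workers from total labor hours."""
    return total_labor_hours / HOURS_PER_WORK_YEAR

def turnover_per_100(separations, total_labor_hours):
    """Separations per 100 estimated full-year workers."""
    return 100 * separations / full_year_workers(total_labor_hours)

# 1,500,000 labor hours imply 500 full-year workers; 600 separations then
# yield a rate of 120 per 100 workers. If hours per worker rise in a boom,
# the fixed 3,000-hour divisor overstates the workforce and so understates
# the turnover rate, dampening its cyclical response as described above.
print(turnover_per_100(600, 1_500_000))  # 120.0
```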

Fourth, total separations are broken down into quits, layoffs, discharges and other (including military enlistment, death and retirement). Prior to 1940, the “other” separations were included in quits.

References

Berridge, William A. “Labor Turnover in American Factories.” Monthly Labor Review 29 (July 1929): 62-65.
Brissenden, Paul F. and Emil Frankel. “Mobility of Labor in American Industry.” Monthly Labor Review 10 (June 1920): 1342-62.
Carter, Susan B. “The Changing Importance of Lifetime Jobs, 1892-1978.”Industrial Relations 27, no. 3 (1988): 287-300.
Carter, Susan B. and Elizabeth Savoca. “The ‘Teaching Procession’? Another Look at Teacher Tenure, 1845-1925.” Explorations in Economic History 29, no. 4 (1992): 401-16.
Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.
Douglas, Paul H. “The Problem of Labor Turnover.” American Economic Review 8, no. 2 (1918): 306-16.
Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, III, edited by Stanley L. Engerman and Robert E. Gallman, 549-623. Cambridge: Cambridge University Press, 2000.
Hall, Robert E. “The Importance of Lifetime Jobs in the U.S. Economy.” American Economic Review 72, no. 4 (1982): 716-24.
Jacoby, Sanford M. “Industrial Labor Mobility in Historical Perspective.” Industrial Relations 22, no. 2 (1983): 261-82.
Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.
Lazear, Edward. P. “Agency, Earnings Profiles, Productivity, and Hours Reduction.” American Economic Review 71, no. 4 (1981): 606-19.
Lescohier, Don D. The Labor Market. New York: Macmillan, 1923.
Maloney, Thomas N. “Racial Segregation, Working Conditions and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-95.
Owen, Laura J. “Worker Turnover in the 1920s: What Labor Supply Arguments Don’t Tell Us.” Journal of Economic History 55, no. 4 (1995a): 822-41.
Owen, Laura J. “Worker Turnover in the 1920s: The Role of Changing Employment Policies.” Industrial and Corporate Change 4 (1995b): 499-530.
Ozanne, Robert. A Century of Labor-Management Relations at McCormick and International Harvester. Madison: University of Wisconsin Press, 1967.
Parsons, Donald O. “Quit Rates Over Time: A Search and Information Approach.” American Economic Review 63, no.3 (1973): 390-401.
Ross, Arthur M. “Do We Have a New Industrial Feudalism?” American Economic Review 48 (1958): 903-20.
Slichter, Sumner. The Turnover of Factory Labor. New York: Appleton, 1921.
Stone, Katherine. “The Origins of Job Structures in the Steel Industry.” Review of Radical Political Economics 6, no. 2 (1974): 113-73.
Sundstrom, William A. “Internal Labor Markets before World War I: On-the-Job Training and Employee Promotion.” Explorations in Economic History 25 (October 1988): 424-45.
U.S. Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, D.C., 1975.
Whatley, Warren C. and Stan Sedo. “Quit Behavior as a Measure of Worker Opportunity: Black Workers in the Interwar Industrial North.” American Economic Review 88, no. 2 (1998): 363-67.

Citation: Owen, Laura. “History of Labor Turnover in the U.S.”. EH.Net Encyclopedia, edited by Robert Whaples. April 29, 2004. URL http://eh.net/encyclopedia/history-of-labor-turnover-in-the-u-s/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets are relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. Viewed from a distance and over the long run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Most financed the voyage by signing contracts, or “indentures,” with British merchants, committing themselves—their labor being their only viable asset—to work for a fixed number of years in the future; the merchants then sold these contracts to colonists after the ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

                                   Total Emigration
Destination          Number   Percentage   Percent listed as servants
New England              54         1.20         1.85
Middle Colonies       1,162        25.78        61.27
  New York              303         6.72        11.55
  Pennsylvania          859        19.06        78.81
Chesapeake            2,984        66.21        96.28
  Maryland            2,217        49.19        98.33
  Virginia              767        17.02        90.35
Lower South             307         6.81        19.54
  Carolinas             106         2.35        23.58
  Georgia               196         4.35        17.86
  Florida                 5         0.11         0.00
Total                 4,507       100.00        80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, the use of indentured servitude to finance the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor-to-capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and they were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

                                       Share of Labor Force (percent)
                                                     Non-Agriculture
Year   Total Labor Force (1,000s)   Agriculture   Total   Manufacturing   Services
1800             1,658                 76.2        23.8
1850             8,199                 53.6        46.4
1900            29,031                 37.5        59.4        35.8          23.6
1950            57,860                 11.9        88.1        41.0          47.1
1999           133,489                  2.3        97.7        24.7          73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because many employment relationships are long-term, adjustments can occur along other margins besides wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, cities throughout the country had to contend for the first time with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills their productivity increased giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses contracting with the firm to supply components or finished products for an agreed price, and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savocca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor-turnover and informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job-ladders, and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985) others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies that are sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers compensation insurance schemes and remove the issue from the courts. Once introduced, workers compensation schemes spread quickly: nine states passed legislation in 1911; 13 more had joined the bandwagon by 1913, and by 1920 44 states had such legislation (Fishback 2001).

Along with workers compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males. But rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, its passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly (through the establishment of occupational safety regulations and anti-discrimination laws, for example) and indirectly (through its efforts to manage the macroeconomy to ensure maximum employment).

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, p. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of some other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
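The cost-of-living adjustment mentioned above matters quantitatively: deflating nominal wages by a local price index can change the measured gap substantially. The sketch below uses invented figures for two regions to show the adjustment:

```python
# Deflating nominal wages by local cost-of-living indices before
# comparing them, as measurement under the "law of one price" requires.
# All figures are hypothetical.
nominal_wage = {"East": 1.00, "West": 1.30}        # nominal daily wage
cost_of_living = {"East": 100.0, "West": 115.0}    # price index (East = 100)

real_wage = {region: nominal_wage[region] / (cost_of_living[region] / 100.0)
             for region in nominal_wage}

nominal_gap = nominal_wage["West"] / nominal_wage["East"] - 1.0
real_gap = real_wage["West"] / real_wage["East"] - 1.0

print(f"nominal gap: {nominal_gap:.0%}, real gap: {real_gap:.0%}")
```

In this made-up example a 30 percent nominal differential shrinks to roughly 13 percent in real terms, which is why studies of wage convergence must work with deflated wages whenever regional price data exist.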

Falling transportation and communications costs have encouraged a trend toward diminishing wage gaps, but this trend has not been consistent over time, nor has it applied to all markets in equal measure. That said, what stands out is in fact the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and reversed wage convergence briefly. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally-generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Within the United States, wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data (Margo 2000; Sundstrom and Rosenbloom 1993; Coelho and Shepherd 1976) are based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey, and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows that there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.
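Montgomery's dispersion measure, the standard deviation of area wages relative to the mean wage, is simply the coefficient of variation. A minimal sketch of the calculation, using invented SMSA wage figures rather than Montgomery's data:

```python
# Geographic wage dispersion measured as the coefficient of variation:
# the standard deviation of area wages divided by the mean wage.
# The wage figures below are invented for illustration only.
import statistics

smsa_wages = [9.2, 10.1, 9.8, 11.0, 10.4, 9.5]  # hypothetical hourly wages

mean_wage = statistics.mean(smsa_wages)
cv = statistics.pstdev(smsa_wages) / mean_wage

print(f"coefficient of variation: {cv:.1%}")
```

A coefficient of variation near 0.10, as Montgomery reports for 1974-1984, means that a typical metropolitan area's wage sat within about 10 percent of the national average.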

Table 3

Net Migration by Region, and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-00 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-00 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age- and sex-specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase this implies a net migration into the region; if the actual increase is less than predicted this implies net migration out of the region. The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
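The census-survival accounting described in the note to Table 3 can be sketched in a few lines: predict the end-of-decade population from the starting population and survival rates (plus surviving births), then attribute the residual between actual and predicted population to net migration. The age groups, survival probabilities, and population counts below are hypothetical; only the residual calculation mirrors the method:

```python
# Forward-survival estimate of net migration, as in the note to Table 3.
# All inputs below are hypothetical.

def net_migration(start_pop_by_age, survival_by_age, births_surviving,
                  actual_end_pop):
    """Residual = actual end-of-decade population minus the population
    predicted from survival of the initial cohorts plus surviving births."""
    predicted = sum(pop * surv
                    for pop, surv in zip(start_pop_by_age, survival_by_age))
    predicted += births_surviving
    return actual_end_pop - predicted

# Hypothetical region: three broad age groups (populations in thousands).
start = [400.0, 300.0, 200.0]        # population at start of decade
survival = [0.98, 0.95, 0.80]        # ten-year survival probabilities
births = 120.0                       # births during decade surviving to its end
actual = 1_050.0                     # enumerated population at decade's end

migrants = net_migration(start, survival, births, actual)
rate = 1000 * migrants / sum(start)  # migrants per 1,000 initial population
print(migrants, round(rate, 1))
```

A positive residual indicates net in-migration and a negative residual net out-migration, exactly the sign convention used in Table 3.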

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive and this essay can only touch on a few of the more general themes raised here as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000, ch. 4) offers evidence of a high degree of equalization within local labor markets between farm and urban wages as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Williamson 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century has seen a substantial convergence in both of these differentials. Table 4 displays comparisons of earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full time, full year workers. Data for 2004 are median weekly earnings of full-time wage and salary workers derived from data in the Current Population Survey accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of those of men, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s, at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower paying jobs. Whether these differences are the result of persistent discrimination or arise because of differences in productivity or a choice by women to trade off greater flexibility in terms of labor market commitment for lower pay remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, resulting in higher levels of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurement of the rate of unemployment began only in 1940. Prior to this date, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that the average level of unemployment and its volatility have declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.
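The sensitivity of volatility estimates to the interpolation rule can be illustrated with a stylized sketch. This is not Lebergott’s or Romer’s actual procedure, and all the numbers below are invented; the point is only that two rules agreeing at the benchmarks can imply very different year-to-year movements in between.

```python
# Stylized sketch (invented numbers): interpolating annual unemployment
# between decennial census benchmarks. Different interpolation rules
# produce different measured volatility, the crux of the debate.

def linear_interpolate(bench_start, bench_end, n_years):
    """Straight-line interpolation between two benchmark rates."""
    step = (bench_end - bench_start) / n_years
    return [bench_start + step * i for i in range(n_years + 1)]

def indicator_interpolate(bench_start, bench_end, indicator):
    """Distribute the benchmark-to-benchmark change in proportion to an
    annual indicator series (e.g. an output index), a common alternative."""
    total = sum(indicator)
    series, level = [bench_start], bench_start
    for x in indicator:
        level += (bench_end - bench_start) * x / total
        series.append(level)
    return series

smooth = linear_interpolate(5.0, 8.0, 3)
# An indicator that concentrates the change in one middle year
bumpy = indicator_interpolate(5.0, 8.0, [0.1, 2.5, 0.4])
# Both series hit 5.0 and 8.0 at the benchmarks, but the intermediate
# years differ, so measured volatility depends on the rule chosen.
```

Because the intermediate years are unobserved, a procedure that tracks a volatile indicator will impute volatile unemployment, while straight-line interpolation imputes smooth unemployment, even though both are equally consistent with the census benchmarks.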

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in increased premiums earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes, especially those associated with the increased use of information technology, have increased relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue collar jobs.

Efforts to expand the scope of analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather they are subject to historical forces of increasing returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Jeffery G. Williamson. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J., III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago Press, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

The Economic History of Korea

Myung Soo Cha, Yeungnam University

Three Periods

Two regime shifts divide the economic history of Korea during the past six centuries into three distinct periods: 1) the period of Malthusian stagnation up to 1910, when Japan annexed Korea; 2) the colonial period from 1910-45, when the country embarked upon modern economic growth; and 3) the postcolonial decades, when living standards improved rapidly in South Korea, while North Korea returned to the world of disease and starvation. The dramatic history of living standards in Korea presents one of the most convincing pieces of evidence to show that institutions — particularly the government — matter for economic growth.

Dynastic Degeneration

The founders of the Chosŏn dynasty (1392-1910) imposed a tribute system on a little-commercialized peasant economy, collecting taxes in the form of a wide variety of products and mobilizing labor to obtain the handicrafts and services the state needed. From the late sixteenth to the early seventeenth century, invading armies from Japan and China shattered the command system and forced a transition to a market economy. The damaged bureaucracy started to receive taxes in money commodities — rice and cotton textiles — and eventually began to mint copper coins and lifted restrictions on trade. The wars also dealt a serious blow to slavery and the pre-war system of forced labor, allowing labor markets to emerge.

Markets were slow to develop: grain markets in agricultural regions of Korea appeared less integrated than those in comparable parts of China and Japan. Population and acreage, however, recovered quickly from the adverse impact of the wars. Population growth came to a halt around 1800, and a century of demographic stagnation followed due to a higher level of mortality. During the nineteenth century, living standards appeared to deteriorate. Both wages and rents fell, tax receipts shrank, and budget deficits expanded, forcing the government to resort to debasement. Peasant rebellions occurred more frequently, and poor peasants left Korea for northern China.

Given that both acreage and population remained stable during the nineteenth century, the worsening living standards imply that aggregate output contracted because land and labor were being used ever more inefficiently. The decline in efficiency appeared to have much to do with the disintegrating system of water control, which included flood control and irrigation.

The water control problem had institutional roots, as in Qing China. Population growth caused rapid deforestation, as peasants were able to readily obtain farmlands by burning off forests, where property rights usually remained ill-defined. (This contrasts with Tokugawa Japan, where conflicts and litigation following competitive exploitation of forests led to forest regulation.) While the deforestation wrought havoc on reservoirs by increasing the incidence and intensity of flooding, private individuals had little incentive to repair the damage, as they expected others to free-ride on the benefits of their efforts. Keeping the system of water control in good condition required public initiatives, which the dynastic government could not undertake. During the nineteenth century, powerful landowning families took turns controlling minor or ailing kings, reducing the state to an instrument serving private interests. Failing to take measures to maintain irrigation, provincial officials accelerated its decay by taking bribes in return for conniving at the practice of farming on the rich soil alongside reservoirs. Peasants responded to the decaying irrigation by developing new rice seed varieties, which could better resist droughts but yielded less. They also tried to counter the increasingly unstable water supply by building waterways linking farmlands with rivers, which frequently met opposition from people farming further downstream. Not only did provincial administrators fail to settle the water disputes, but some of them became central causes of clashes. In 1894 peasants protested against a local administrator’s attempts to generate private income by collecting fees for using waterways, which had been built by peasants. The uprising quickly developed into a nationwide peasant rebellion, which the crumbling government could suppress only by calling in military forces from China and Japan.
An unforeseen consequence of the rebellion was the Sino-Japanese war fought on Korean soil, where Japan defeated China, tipping the balance of power in Korea critically in its favor.

The water control problem affected primarily rice farming productivity: during the nineteenth century paddy land prices (as measured by the amount of rice) fell, while dry farm prices (as measured by the amount of dry farm products) rose. Peasants and landlords converted paddy lands into dry farms during the nineteenth century, and there occurred an exodus of workers out of agriculture into handicraft and commerce. Despite the proto-industrialization, late dynastic Korea remained less urbanized than Qing China, not to mention Tokugawa Japan. Seasonal fluctuations in rice prices in the main agricultural regions of Korea were far wider than those observed in Japan during the nineteenth century, implying a significantly higher interest rate, a lower level of capital per person, and therefore lower living standards for Korea. In the mid-nineteenth century paddy land productivity in Korea was about half of that in Japan.

Colonial Transition to Modern Economic Growth

Less than two decades after having been opened by Commodore Perry, Japan first made its ambitions about Korea known by forcing the country open to trade in 1876. Defeating Russia in the war of 1905, Japan virtually annexed Korea, which was made official five years later. What replaced the feeble and predatory bureaucracy of the Chosŏn dynasty was a developmental state. Drawing on the Meiji government’s experience, the colonial state introduced a set of expensive policy measures to modernize Korea. One important project was to improve infrastructure: railway lines were extended, and roads, harbors, and communication networks were improved, which rapidly integrated goods and factor markets both nationally and internationally. Another project was a vigorous health campaign: the colonial government improved public hygiene, introduced modern medicine, and built hospitals, significantly accelerating the mortality decline set in motion around 1890, apparently by the introduction of the smallpox vaccination. The mortality transition resulted in a population expanding by 1.4 percent per year during the colonial period. The third project was to revamp education. As modern teaching institutions quickly replaced traditional schools teaching Chinese classics, the primary school enrollment ratio rose from 1 percent in 1910 to 47 percent in 1943. Finally, the cadastral survey (1910-18) modernized and legalized property rights to land, which boosted not only the efficiency in land use, but also tax revenue from landowners. These modernization efforts generated sizable public deficits, which the colonial government could finance partly by floating bonds in Japan and partly by unilateral transfers from the Japanese government.

The colonial government implemented industrial policy as well. The Rice Production Development Program (1920-1933), a policy response to the Rice Riots in Japan in 1918, was aimed at increasing rice supply within the Japanese empire. In colonial Korea, the program placed particular emphasis upon reversing the decay in water control. The colonial government provided subsidies for irrigation projects, and set up institutions to lower information, negotiation, and enforcement costs in building new waterways and reservoirs. Improved irrigation made it possible for peasants to grow high yielding rice seed varieties. Completion of a chemical fertilizer factory in 1927 increased the use of fertilizer, further boosting the yields from the new type of rice seeds. Rice prices fell rapidly in the late 1920s and early 1930s in the wake of the world agricultural depression, leading to the suspension of the program in 1933.

Despite the Rice Program, the structure of the colonial economy shifted steadily away from agriculture towards manufacturing throughout the colonial period. From 1911-40 the share of manufacturing in GDP increased from 6 percent to 28 percent, and the share of agriculture fell from 76 percent to 41 percent. Major causes of the structural change included diffusion of modern manufacturing technology, the world agricultural depression shifting the terms of trade in favor of manufacturing, and Japan’s early recovery from the Great Depression generating an investment boom in the colony. Also Korea’s cheap labor and natural resources and the introduction of controls on output and investment in Japan to mitigate the impact of the Depression helped attract direct investment in the colony. Finally, subjugating party politicians and pushing Japan into the Second World War with the invasion of China in 1937, the Japanese military began to develop northern parts of the Korean peninsula as an industrial base producing munitions.

The institutional modernization, technological diffusion, and the inflow of Japanese capital put an end to the Malthusian degeneration and pushed Korea onto the path of modern economic growth. Both rents and wages stopped falling and started to rise from the early twentieth century. As the population explosion made labor increasingly abundant vis-a-vis land, rents increased more rapidly than wages, suggesting that income distribution became less equal during the colonial period. Per capita output rose faster than one percent per year from 1911-38.

Per capita grain consumption declined during the colonial period, providing grounds for the traditional criticism that Japanese colonialism exploited Korea. However, per capita real consumption increased, owing to rising non-grain and non-food consumption, and Koreans were also getting better education and living longer. In the late 1920s, life expectancy at birth was 37 years, an estimate several years longer than in China and almost ten years shorter than in Japan. Life expectancy increased to 43 years by the end of the colonial period. Male mean stature was slightly above 160 centimeters at the end of the 1920s, not significantly different from Chinese or Japanese heights, and appears to have declined during the latter half of the colonial period.

South Korean Prosperity

With the end of the Second World War in 1945, two separate regimes emerged on the Korean peninsula to replace the colonial government. The U.S. military government took over the southern half, while communist Russia set up a Korean leadership in the northern half. De-colonization and political division meant a sudden disruption of trade both with Japan and within Korea, causing serious economic turmoil. Dealing with the post-colonial chaos with the help of economic aid, the U.S. military government privatized properties previously owned by the Japanese government and civilians. The first South Korean government, established in 1948, carried out a land reform, making land distribution more egalitarian. Then the Korean War broke out in 1950; over its three-year course it killed one and a half million people and destroyed about a quarter of the capital stock.

After the war, South Korean policymakers set about stimulating economic growth by promoting indigenous industrial firms, following the example of many other post-World War II developing countries. The government selected firms in targeted industries and gave them privileges to buy foreign currencies and to borrow funds from banks at preferential rates. It also erected tariff barriers and prohibited manufacturing imports, hoping that this protection would give domestic firms a chance to improve productivity through learning-by-doing and the import of advanced technologies. Under this policy, known as import-substitution industrialization (ISI), however, entrepreneurs seemed more interested in maximizing and perpetuating favors by bribing bureaucrats and politicians. This behavior, dubbed directly unproductive profit-seeking (DUP), caused efficiency to falter and living standards to stagnate, providing the background to the collapse of the First Republic in April 1960.

The military coup led by General Park Chung Hee overthrew the short-lived Second Republic in May 1961 and shifted to a strategy of stimulating growth through export promotion (EP hereafter), although ISI was not altogether abandoned. Under EP, policymakers gave various types of favors — low-interest loans being the most important — to exporting firms according to their export performance. As the qualification for special treatment was quantifiable and objective, the room for DUP became significantly smaller. Another advantage of EP over ISI was that it accelerated productivity advances by placing firms under the discipline of export markets and by widening contact with the developed world: efficiency grew significantly faster in export industries than in the rest of the economy. In the decade following the shift to EP, per capita output doubled, and South Korea became an industrialized country: from 1960/62 to 1973/75 the share of agriculture in GDP fell from 45 percent to 25 percent, while the share of manufacturing rose from 9 percent to 27 percent. One important factor contributing to this achievement was that the authoritarian government enjoyed relative independence from, and avoided capture by, special interests.

The withdrawal of U.S. troops from Vietnam in the early 1970s and the subsequent communist conquest of the region alarmed the South Korean leadership, which had been coping with the threat of North Korea with the help of the U.S. military presence. Park Chung Hee’s response was to reduce reliance on U.S. armed support by expanding the capacity to produce munitions, which required a return to ISI to build heavy and chemical industries (HCI). The government intervened heavily in the financial markets, directing banks to provide low-interest loans to chaebols — conglomerates of businesses owned by a single family — selected for the task of developing different sectors of HCI. While it succeeded in expanding the capital-intensive industries more rapidly than the rest of the economy, the HCI drive generated multiple symptoms of distortion, including rapidly slowing growth, worsening inflation, and an accumulation of non-performing loans.

Again the ISI phase ended with a regime shift, triggered by Park Chung Hee’s assassination in 1979. In the 1980s, the succeeding leadership made systematic attempts to sort out the unwelcome legacy of the HCI drive by deregulating the trade and financial sectors. Liberalization of the capital account followed in the 1990s, causing a rapid accumulation of short-term external debt. This, together with a highly leveraged corporate sector and a banking sector destabilized by financial repression, provided the background for the contagion of financial crisis from Southeast Asia in 1997. The crisis, in turn, provided strong momentum for corporate and financial sector reform.

In the quarter century following the policy shift of the early 1960s, South Korean per capita output grew at an unusually rapid rate of 7 percent per year, a growth performance paralleled only by Taiwan and the two city-states of Hong Kong and Singapore. The share of South Koreans enjoying the benefits of growth increased more rapidly from the end of the 1970s, when the rise in the Gini coefficient (which measures the inequality of income distribution) under way since the colonial period was reversed. The growth was attributable far more to increased use of productive inputs — physical capital in particular — than to productivity advances. The rapid capital accumulation was driven by an increasingly high savings rate, itself the result of a falling dependency ratio, a lagged outcome of rapidly falling mortality during the colonial period. The high growth was also aided by the accumulation of human capital, which began with the introduction of modern education under Japanese rule. Finally, the South Korean developmental state, as symbolized by Park Chung Hee, a former officer of the Japanese Imperial Army who served in wartime Manchuria, was closely modeled on the colonial system of government. In short, South Korea grew on the shoulders of the colonial achievement, rather than emerging from the ashes of the Korean War, as is sometimes asserted.

North Korean Starvation

Nor did the North Korean economy emerge out of a void. Founders of the regime took over the system of command set up by the Japanese rulers to invade China. They also benefited from the colonial industrialization concentrated in the north, which had raised the standard of living in the north above that in the south by the end of colonial rule. While this economic advantage made the North Korean leadership confident enough to invade the South in 1950, the lead could not be sustained: North Korea started to lag behind the fast-growing South from the late 1960s, and then suffered a tragic decline in living standards in the 1990s.

After the conclusion of the Korean War, the North Korean power elites adopted a strategy of driving growth through forced saving, which quickly failed for several reasons. First, managers and workers in collective farms and state enterprises had little incentive to improve productivity to counter the falling marginal productivity of capital. Second, the country’s self-imposed isolation made it difficult to benefit from the advanced technologies of the developed world through trade and foreign investment. Finally, the despotic and militaristic rule diverted resources to unproductive purposes and disturbed the consistency of planning.

The economic stalemate forced the ruling elites to experiment with material incentives and independent accounting for state enterprises. However, they could not push institutional reform far enough, for fear that it might destabilize their totalitarian rule. Efforts to attract foreign capital also ended in failure: having spent the funds lent by Western banks in the early 1970s largely for military purposes, North Korea defaulted on the loans, and laws introduced in the 1980s to draw foreign direct investment had little effect.

The collapse of centrally planned economies in the late 1980s virtually ended energy and capital goods imports at subsidized prices, dealing a serious blow to the wobbly regime. Desperate efforts to resolve chronic food shortages by expanding acreage through deforestation made the country vulnerable to climatic shocks in the 1990s. The end result was a disastrous subsistence crisis, to which the militarist regime responded by extorting concessions from the rest of the world through brinkmanship diplomacy.

Further Reading

Amsden, Alice. Asia’s Next Giant: South Korea and Late Industrialization. Oxford: Oxford University Press, 1989.

Ban, Sung Hwan. “Agricultural Growth in Korea.” In Agricultural Growth in Japan, Taiwan, Korea, and the Philippines, edited by Yujiro Hayami, Vernon W. Ruttan, and Herman M. Southworth, 96-116. Honolulu: University Press of Hawaii, 1979.

Cha, Myung Soo. “Imperial Policy or World Price Shocks? Explaining Interwar Korean Consumption Trend.” Journal of Economic History 58, no. 3 (1998): 731-754.

Cha, Myung Soo. “The Colonial Origins of Korea’s Market Economy.” In Asia-Pacific Dynamism, 1550-2000, edited by A.J.H. Latham and H. Kawakatsu, 86-103. London: Routledge, 2000.

Cha, Myung Soo. “Facts and Myths about Korea’s Economic Past.” Forthcoming in Australian Economic History Review 44 (2004).

Cole, David C. and Yung Chul Park. Financial Development in Korea, 1945-1978. Cambridge: Harvard University Press, 1983.

Dollar, David and Kenneth Sokoloff. “Patterns of Productivity Growth in South Korean Manufacturing Industries, 1963-1979.” Journal of Development Economics 33, no. 2 (1990): 309-327.

Eckert, Carter J. Offspring of Empire: The Koch’ang Kims and the Colonial Origins of Korean Capitalism, 1876-1945. Seattle: University of Washington Press, 1991.

Gill, Insong. “Stature, Consumption, and the Standard of Living in Colonial Korea.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Joerg Baten, 122-138. Stuttgart: Franz Steiner Verlag, 1998.

Gragert, Edwin H. Landownership under Colonial Rule: Korea’s Japanese Experience, 1900-1935. Honolulu: University Press of Hawaii, 1994.

Haggard, Stephan. The Political Economy of the Asian Financial Crisis. Washington: Institute for International Economics, 2000.

Haggard, Stephan, D. Kang and C. Moon. “Japanese Colonialism and Korean Development: A Critique.” World Development 25 (1997): 867-81.

Haggard, Stephan, Byung-kook Kim and Chung-in Moon. “The Transition to Export-led Growth in South Korea: 1954-1966.” Journal of Asian Studies 50, no. 4 (1991): 850-73.

Kang, Kenneth H. “Why Did Koreans Save So Little and Why Do They Now Save So Much?” International Economic Journal 8 (1994): 99-111.

Kang, Kenneth H, and Vijaya Ramachandran. “Economic Transformation in Korea: Rapid Growth without an Agricultural Revolution?” Economic Development and Cultural Change 47, no. 4 (1999): 783-801.

Kim, Kwang Suk and Michael Roemer. Growth and Structural Transformation. Cambridge, MA: Harvard University Press, 1979.

Kimura, Mitsuhiko. “From Fascism to Communism: Continuity and Development of Collectivist Economic Policy in North Korea.” Economic History Review 52, no.1 (1999): 69-86.

Kimura, Mitsuhiko. “Standards of Living in Colonial Korea: Did the Masses Become Worse Off or Better Off under Japanese Rule?” Journal of Economic History 53, no. 3 (1993): 629-652.

Kohli, Atul. “Where Do High Growth Political Economies Come From? The Japanese Lineage of Korea’s ‘Developmental State’.” World Development 22, no. 9 (1994): 1269-93.

Krueger, Anne. The Developmental Role of the Foreign Sector and Aid. Cambridge: Harvard University Press, 1982.

Kwon, Tai Hwan. Demography of Korea: Population Change and Its Components, 1925-66. Seoul: Seoul National University Press, 1977.

Noland, Marcus. Avoiding the Apocalypse: The Future of the Two Koreas. Washington: Institute for International Economics, 2000.

Palais, James B. Politics and Policy in Traditional Korea. Cambridge: Harvard University Press, 1975.

Stern, Joseph J, Ji-hong Kim, Dwight H. Perkins and Jung-ho Yoo, editors. Industrialization and the State: The Korean Heavy and Chemical Industry Drive. Cambridge: Harvard University Press, 1995.

Woo, Jung-en. Race to the Swift: State and Finance in Korean Industrialization. New York: Columbia University Press, 1991.

Young, Alwyn. “The Tyranny of Numbers: Confronting the Statistical Realities of the East Asian Growth Experience.” Quarterly Journal of Economics 110, no. 3 (1995): 641-80.

Citation: Cha, Myung. “The Economic History of Korea”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-korea/

The Roots of American Industrialization, 1790-1860

David R. Meyer, Brown University

The Puzzle of Industrialization

In a society which is predominantly agricultural, how is it possible for industrialization to gain a foothold? One view is that the demand of farm households for manufactures spurs industrialization, but such an outcome is not guaranteed. What if farm households can meet their own food requirements, and they choose to supply some of their needs for manufactures by engaging in small-scale craft production in the home? They might supplement this production with limited purchases of goods from local craftworkers and purchases of luxuries from other countries. This local economy would be relatively self-sufficient, and there is no apparent impetus to alter it significantly through industrialization, that is, the growth of workshop and factory production for larger markets. Others would claim that limited gains might come from specialization, once demand passed some small threshold. Finally, it has been argued that if the farmers are impoverished, some of them would be available for manufacturing and this would provide an incentive to industrialize. However, this argument leaves open the question of who would purchase the manufactures. One possibility is that non-farm rural dwellers, such as tradespeople, innkeepers, and professionals, as well as a small urban population, might provide an impetus to limited industrialization.

The Problem with the “Impoverished Agriculture” Theory

The industrialization of the eastern United States from 1790 to 1860 raises similar conundrums. For a long time, scholars thought that eastern agriculture was mostly of poor quality. On this view, the farm labor force left agriculture for workshops, such as those which produced shoes, or for factories, such as the cotton textile mills of New England. These manufactures provided employment for women and children, who otherwise had limited productive possibilities because the farms were not economical. Yet the market for manufactures remained mostly in the East prior to 1860. Consequently, it is unclear who would have purchased the products to support the growth of manufactures before 1820, as well as to undergird the large-scale industrialization of the East during the two decades following 1840. Even if the impoverished-agriculture explanation of the East’s industrialization is rejected, we are still left with the curiosity that as late as 1840, about eighty percent of the population lived in rural areas, though some of them were in nonfarm occupations.

In brief, the puzzle of eastern industrialization between 1790 and 1860 can be resolved – the East had a prosperous agriculture. Farmers supplied low-cost agricultural products to rural and urban dwellers, and this population demanded manufactures, which were supplied by vigorous local and subregional manufacturing sectors. Some entrepreneurs shifted into production for larger market areas, and this transformation occurred especially in sectors such as shoes, selected light manufactures produced in Connecticut (such as buttons, tinware, and wooden clocks), and cotton textiles. Transportation improvements exerted little impact on these agricultural and industrial developments, primarily because the lowly wagon served effectively as a transport medium and many of the East’s most prosperous areas were accessible to cheap waterway transportation. The metropolises of Boston, New York, Philadelphia, and, to a lesser extent, Baltimore, and the satellites of each (together, each metropolis and its satellites is called a metropolitan industrial complex), became leading manufacturing centers, and other industrial centers emerged in prosperous agricultural areas distant from these complexes. The East industrialized first, and, subsequently, the Midwest began an agricultural and industrial growth process which was underway by the 1840s. Together, the East and the Midwest constituted the American Manufacturing Belt, which was formed by the 1870s, whereas the South failed to industrialize commensurately.

Synergy between Agriculture and Manufacturing

The solution to the puzzle of how industrialization can occur in a predominantly agricultural economy recognizes the possibility of synergy between agriculture and manufacturing. During the first three decades following 1790, prosperous agricultural areas emerged in the eastern United States. Initially, these areas were concentrated near the small metropolises of Boston, New York, and Philadelphia, and in river valleys such as the Connecticut Valley in Connecticut and Massachusetts, the Hudson and Mohawk Valleys in New York, the Delaware Valley bordering Pennsylvania and New Jersey, and the Susquehanna Valley in eastern Pennsylvania. These agricultural areas had access to cheap, convenient transport which could be used to reach markets; the farms supplied the growing urban populations in the cities and some of the products were exported. Furthermore, the farmers supplied the nearby, growing non-farm populations in the villages and small towns who provided goods and services to farmers. These non-farm consumers included retailers, small mill owners, teamsters, craftspeople, and professionals (clergy, physicians, and lawyers).

Across every decade from 1800 to 1860, the number of farm laborers grew, thus testifying to the robustness of eastern agriculture (see Table 1). And this increase occurred in the face of an expanding manufacturing sector, as increasing numbers of rural dwellers left the farms to work in the factories, especially after 1840. Even New England, the region which presumably was the epitome of declining agriculture, witnessed a rise in the number of farm laborers all the way up to 1840, and, as of 1860, the drop-off from the peak was small. Massachusetts and Connecticut, which had vigorous small workshops and increasing numbers of small factories before 1840, followed by a surge in manufacturing after 1840, matched the trajectory of farm laborers in New England as a whole. The numbers in these two states peaked in 1840 and fell off only modestly over the next twenty years. The Middle Atlantic region witnessed an uninterrupted rise in the number of farm laborers over the sixty-year period. New York and Pennsylvania, the largest states, followed slightly different paths. In New York, the number of farm laborers peaked around 1840 and then stabilized near that level for the next two decades, whereas in Pennsylvania the number of farm laborers rose in an uninterrupted fashion.

Table 1
Number of Farm Laborers by Region and Selected States, 1800-1860

Year 1800 1810 1820 1830 1840 1850 1860
New England 228,100 257,700 303,400 353,800 389,100 367,400 348,100
Massachusetts 73,200 72,500 73,400 78,500 87,900 80,800 77,700
Connecticut 50,400 49,300 51,500 55,900 57,000 51,400 51,800
Middle Atlantic 375,700 471,400 571,700 715,000 852,800 910,400 966,600
New York 111,800 170,100 256,000 356,300 456,000 437,100 449,100
Pennsylvania 112,600 141,000 164,900 195,200 239,000 296,300 329,000
East 831,900 986,800 1,178,500 1,422,600 1,631,000 1,645,200 1,662,800

Source: Thomas Weiss, “U.S. Labor Force Estimates and Economic Growth, 1800-1860,” in American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis (Chicago, IL: University of Chicago Press, 1992), table 1A.9, p. 51.

The farmers, retailers, professionals, and others in these prosperous agricultural areas accumulated capital which became available for other economic sectors, and manufacturing was one of the most important to receive this capital. Entrepreneurs who owned small workshops and factories obtained capital to turn out a wide range of goods such as boards, boxes, utensils, building hardware, furniture, and wagons, which were in demand in the agricultural areas. And, some of these workshops and factories enlarged their market areas to a subregion as they gained production efficiencies; but, this did not account for all industrial development. Selected manufactures such as shoes, tinware, buttons, and cotton textiles were widely demanded by urban and rural residents of prosperous agricultural areas and by residents of the large cities. These products were high value relative to their weight; thus, the cost to ship them long distances was low. Astute entrepreneurs devised production methods and marketing approaches to sell these goods in large market areas, including New England and the Middle Atlantic regions of the East.

Manufactures Which Were Produced for Large Market Areas

Shoes and Tinware

Small workshops turned out shoes. Massachusetts entrepreneurs devised an integrated shoe production complex based on a division of labor among shops, and they established a marketing arm of wholesalers, principally in Boston, who sold the shoes throughout New England, to the Middle Atlantic, and to the South (particularly, to slave plantations). Businesses in Connecticut drew on the extensive capital accumulated by the well-to-do rural and urban dwellers of that state and moved into tinware, plated ware, buttons, and wooden clocks. These products, like shoes, also were manufactured in small workshops, but a division of labor among shops was less important than the organization of production within shops. Firms producing each good tended to agglomerate in a small subregion of the state. These clusters arose because entrepreneurs shared information about production techniques and specialized skills which they developed, and this knowledge was communicated as workers moved among shops. Initially, a marketing system of peddlers emerged in the tinware sector, and they sold the goods, first throughout Connecticut, and then they extended their travels to the rest of New England and to the Middle Atlantic. Workshops which made other types of light, high-value goods soon took advantage of the peddler distribution system to enlarge their market areas. At first, these peddlers operated part-time during the year, but as the supply of goods increased and market demand grew, peddlers operated for longer periods of the year and they traveled farther.

Cotton Textiles

Cotton textile manufacturing was an industry built on low-wage, especially female, labor; presumably, this industry offered opportunities in areas where farmers were unsuccessful. Yet, similar to the other manufactures which enlarged their market areas to the entire East before 1820, cotton textile production emerged in prosperous agricultural areas. That is not surprising, because this industry required substantial capital, technical skills, and, initially, nearby markets. These requirements were met in rich farming areas, which also could draw on wealthy merchants in large cities who contributed capital and provided sales outlets beyond nearby markets as output grew. The production processes in cotton textile manufacturing, however, diverged from the approaches to making shoes and small metal and wooden products. From the start, production processes included textile machinery, which initially consisted of spinning machines to make yarn; later (after 1815), weaving machines and other mechanical equipment were added. Highly skilled mechanics were required to build the machines and to maintain them. The greater capital requirements for cotton mills, compared to shoe and small-goods manufactures in Connecticut, meant that merchant wholesalers and wealthy retailers, professionals, mill owners, and others were important underwriters of the factories.

Starting in the 1790s, New England, and, especially, Rhode Island, housed the leaders in early cotton textile manufacturing. Providence merchants funded some of the first successful cotton spinning mills, and they drew on the talents of Samuel Slater, an immigrant British machinist. He trained many of the first important textile mechanics, and investors in various parts of Rhode Island, Connecticut, Massachusetts, New Hampshire, and New York hired them to build mills. Between 1815 and 1820, power-loom weaving began to be commercially feasible, and this effort was led by firms in Rhode Island and, especially, in Massachusetts. Boston merchants, starting with the Boston Manufacturing Company at Waltham, devised a business plan which targeted large-scale, integrated cotton textile manufacturing, with a marketing/sales arm housed in a separate firm. They enlarged their effort significantly after 1820, and much of the impetus to the growth of the cotton textile industry came from the success entrepreneurs had in lowering the cost of production.

The Impact of Transportation Improvements

Following 1820, government and private sources invested substantial sums in canals, and after 1835, railroad investment increased rapidly. Canals required huge volumes of low-value commodities in order to pay operating expenses, cover interest on the bonds which were issued for construction, and retire the bonds at maturity. These conditions were only met in the richest agricultural and resource (lumbering and coal mining, for example) areas traversed by the Erie and Champlain Canals in New York and the coal canals in eastern Pennsylvania and New Jersey. The vast majority of the other canals failed to yield benefits for agriculture and industry, and most were costly debacles. Early railroads mainly carried passengers, especially within fifty to one hundred miles of the largest cities – Boston, New York, Philadelphia, and Baltimore. Industrial products were not carried in large volumes until after 1850; consequently, railroads built before that time had little impact on industrialization in the East.

Canals and railroads had minor impacts on agricultural and industrial development because the lowly wagon provided withering competition. Wagons offered flexible, direct connections between origins and destinations, without the need to transship goods, as was the case with canals and railroads; those modes required wagons at their end points. Within a distance of about fifty miles, the cost of wagon transport was competitive with alternative transport modes, so long as the commodities were high value relative to their weight. And infrequent transport of these goods could occur over distances of as much as one hundred miles. This applied to many manufactures, and agricultural commodities could be raised to high value by processing prior to shipment. Thus, wheat was turned into flour; corn and other grains were fed to cattle and pigs, which were processed into beef and pork prior to shipment; and milk was converted into butter and cheese. Most of the richest agricultural and industrial areas of the East were less than one hundred miles from the largest cities, or they were near low-cost waterway transport along rivers, bays, and the Atlantic Coast. Therefore, canals and railroads in these areas had difficulty competing for freight, and outside these areas the limited production generated little demand for long-distance transport services.

Agricultural Prosperity Continues

After 1820, eastern farmers seized the increasing market opportunities in the prosperous rural areas as nonfarm processing expanded and village and small town populations demanded greater amounts of farm products. The large number of farmers who were concentrated around the rapidly growing metropolises (Boston, New York, Philadelphia, and Baltimore) and near urban agglomerations such as Albany-Troy, New York, developed increasing specialization in urban market goods such as fluid milk, fresh vegetables, fruit, butter, and hay (for horse transport). Farmers farther away responded to competition by shifting into products which could be transported long distances to market: wheat turned into flour, cattle which walked to market, or pigs which were converted into pork. These farms sent butter to market during the winter, and cheese was a lucrative specialty for the long stretches of the year when temperatures were cool.

These changes swept across the East, and, after 1840, farmers increasingly adjusted their production to compete with cheap wheat, cattle, and pork arriving over the Erie Canal from the Midwest. Wheat growing became less profitable, and specialized agriculture expanded, such as potatoes, barley, and hops in central New York and cigar tobacco in the Connecticut Valley. Farmers near the largest cities intensified their specialization in urban market products, and as the railroads expanded, fluid milk was shipped longer distances to these cities. Farmers in less accessible areas and on poor agricultural land, whether infertile or too hilly, became less competitive. If these farmers and their children stayed, their incomes declined relative to others in the East, but if they moved to the Midwest or to the burgeoning industrial cities of the East, they had the chance of participating in the rising prosperity.

Metropolitan Industrial Complexes

The metropolises of Boston, New York, Philadelphia, and, to a lesser extent, Baltimore, led the industrial expansion after 1820, because they were the greatest concentrated markets, they had the most capital, and their wholesalers provided access to subregional and regional markets outside the metropolises. By 1840, each of them was surrounded by industrial satellites – manufacturing centers in close proximity to, and economically integrated with, the metropolis. Together, these metropolises and their satellites formed metropolitan industrial complexes, which accounted for almost one-quarter of the nation’s manufacturing (see Table 2). For example, metropolises and satellites included Boston and Lowell, New York and Paterson (New Jersey), Philadelphia and Reading (Pennsylvania), and Baltimore and Wilmington (Delaware), which also was a satellite of Philadelphia. Among the four leading metropolises, New York and Philadelphia housed, by far, the largest share of the nation’s manufacturing workers, and their satellites had large numbers of industrial workers. Yet, Boston’s satellites contained the greatest concentration of industrial workers in the nation, with almost seven percent of the national total. The New York, Philadelphia, and Boston metropolitan industrial complexes each had approximately the same share of the nation’s manufacturing workers. These complexes housed a disproportionate share of the nation’s commerce-serving manufactures such as printing-publishing and paper and of local, regional, and national market manufactures such as glass, drugs and paints, textiles, musical instruments, furniture, hardware, and machinery.

Table 2
Manufacturing Employment in the Metropolitan Industrial Complexes
of New York, Philadelphia, Boston, and Baltimore
as a Percentage of National Manufacturing Employment in 1840

Metropolis Satellites Complex
New York 4.1% 3.4% 7.4%
Philadelphia 3.9 2.9 6.7
Boston 0.5 6.6 7.1
Baltimore 2.0 0.2 2.3
Four Complexes 10.5 13.1 23.5

Note: Metropolitan county is defined as the metropolis for each complex and “outside” comprises nearby counties; those included in each complex were the following. New York: metropolis (New York, Kings, Queens, Richmond); outside (Connecticut: Fairfield; New York: Westchester, Putnam, Rockland, Orange; New Jersey: Bergen, Essex, Hudson, Middlesex, Morris, Passaic, Somerset). Philadelphia: metropolis (Philadelphia); outside (Pennsylvania: Bucks, Chester, Delaware, Montgomery; New Jersey: Burlington, Gloucester, Mercer; Delaware: New Castle). Boston: metropolis (Suffolk); outside (Essex, Middlesex, Norfolk, Plymouth). Baltimore: metropolis (Baltimore); outside (Anne Arundel, Harford).

Source: U.S. Bureau of the Census, Compendium of the Sixth Census, 1840 (Washington, D.C.: Blair and Rives, 1841).

Also, by 1840, prosperous agricultural areas farther from these complexes, such as the Connecticut Valley in New England, the Hudson Valley, the Erie Canal Corridor across New York state, and southeastern Pennsylvania, housed significant amounts of manufacturing in urban places. At the intersection of the Hudson and Mohawk rivers, the Albany-Troy agglomeration contained one of the largest concentrations of manufacturing outside the metropolitan complexes. And, industrial towns such as Utica, Syracuse, Rochester, and Buffalo were strung along the Erie Canal Corridor. Many of the manufactures (such as furniture, wagons, and machinery) served subregional markets in the areas of prosperous agriculture, but some places also developed specialization in manufactures (textiles and hardware) for larger regional and interregional market areas (the East as a whole). The Connecticut Valley, for example, housed many firms which produced cotton textiles, hardware, and cutlery.

Manufactures for Eastern and National Markets

Shoes

In several industrial sectors whose firms had expanded before 1820 to regional, and even multiregional, markets in the East, firms intensified their penetration of eastern markets and reached into markets in the rapidly growing Midwest between 1820 and 1860. In eastern Massachusetts, a production complex of shoe firms innovated methods of organizing output within and among firms, and they developed a wide array of specialized tools and components to increase productivity and to lower manufacturing costs. In addition, a formidable wholesaling, marketing, and distribution complex, headed by Boston wholesalers, pushed the ever-growing volume of shoes into sales channels which reached throughout the nation. Machinery did not come into use until the 1850s, and, by 1860, Massachusetts accounted for half of the value of the nation’s shoe production.

Cotton Textiles

In contrast, machinery constituted an important factor of production which drove down the price of cotton textile goods, substantially enlarging the quantity consumers demanded. Before 1820, most of the machinery innovations improved the spinning process for making yarn, and in the five years following 1815, innovations in mechanized weaving generated an initial substantial drop in the cost of production as the first integrated spinning-weaving mills emerged. During the next decade and a half the price of cotton goods collapsed by over fifty percent as large integrated spinning-weaving mills became the norm for the production of most cotton goods. Therefore, by the mid-1830s vast volumes of cotton goods were pouring out of textile mills, and a sophisticated set of specialized wholesaling firms, mostly concentrated in Boston, and secondarily, in New York and Philadelphia, channeled these items into the national market.

Prior to 1820, the cotton textile industry was organized into three cores. The Providence core dominated and the Boston core occupied second place; both of these were based mostly on mechanized spinning. A third core in the city of Philadelphia was based on hand spinning and weaving. Within about fifteen years after 1820, the Boston core soared to a commanding position in cotton textile production as a group of Boston merchants and their allies relentlessly replicated their business plan at various sites in New England, including at Lowell, Chicopee, and Taunton in Massachusetts, at Nashua, Manchester, and Dover in New Hampshire, and at Saco in Maine. The Providence core continued to grow, but its investors did not seem to fully grasp the strategic, multi-faceted business plan which the Boston merchants implemented. Investors in an emerging core within about fifty to seventy-five miles of New York City, in the Hudson Valley and northern New Jersey, similarly failed to grasp the Boston merchants’ plan, and these New York City area firms never reached the scale of the firms of the Boston core. The Philadelphia core enlarged to nearby areas southwest of the city and in Delaware, but these firms stayed small, and the Philadelphia firms created a small-scale, flexible production system which turned out specialized goods, not the mass-market commodity textiles of the other cores.

Capital Investment in Cotton Textiles

The distribution of capital investment in cotton textiles across the regions and states of the East between 1820 and 1860 captures the changing prominence of the cores of cotton textile production (see Table 3). In 1820, the New England and Middle Atlantic regions contained approximately equal shares (almost half each) of the nation’s capital investment. However, during the 1820s the cotton textile industry restructured to a form which was maintained for the next three decades. New England’s share of capital investment surged to about seventy percent, and it maintained that share until 1860, whereas the Middle Atlantic region’s share fell to around twenty percent by 1840 and remained near that level until 1860. The rest of the nation, primarily the South, reached about ten percent of total capital investment around 1840 and continued at that level for the next two decades. Massachusetts became the leading cotton textile state by 1831, and Rhode Island, the early leader, gradually slipped to about ten percent by the 1850s; New Hampshire and Pennsylvania housed shares similar to Rhode Island’s by that time.

Table 3
Capital Invested in Cotton Textiles
by Region and State as a Percentage of the Nation
1820-1860

Region/state 1820 1831 1840 1850 1860
New England 49.6% 69.8% 68.4% 72.3% 70.3%
Maine 1.6 1.9 2.7 4.5 6.1
New Hampshire 5.6 13.1 10.8 14.7 12.8
Vermont 1.0 0.7 0.2 0.3 0.3
Massachusetts 14.3 31.7 34.1 38.2 34.2
Connecticut 11.6 7.0 6.2 5.7 6.7
Rhode Island 15.4 15.4 14.3 9.0 10.2
Middle Atlantic 46.2 29.5 22.7 17.3 19.0
New York 18.8 9.0 9.6 5.6 5.5
New Jersey 4.7 5.0 3.4 2.0 1.3
Pennsylvania 6.3 9.3 6.5 6.1 9.3
Delaware 4.0 0.9 0.6 0.6 0.6
Maryland 12.4 5.3 2.6 3.0 2.3
Rest of nation 4.3 0.7 9.0 10.4 10.7
Nation 100.0% 100.0% 100.0% 100.0% 100.0%
Total capital (thousands) $10,783 $40,613 $51,102 $74,501 $98,585

Sources: David J. Jeremy, Transatlantic Industrial Revolution: The Diffusion of Textile Technologies Between Britain and America, 1790-1830s (Cambridge, MA: MIT Press, 1981), appendix D, table D.1, p. 276; U.S. Bureau of the Census, Compendium of the Sixth Census, 1840 (Washington, D.C.: Blair and Rives, 1841); U.S. Bureau of the Census, Report on the Manufactures of the United States at the Tenth Census, 1880 (Washington, D.C.: Government Printing Office, 1883).

Connecticut’s Industries

In Connecticut, industrialists built on their successful production and sales prior to 1820 and expanded into a wider array of products which they sold in the East and South, and, after 1840, they acquired more sales in the Midwest. This success was not based on a mythical “Yankee ingenuity,” which typically has been framed in terms of character. Instead, this ingenuity rested on fundamental assets: a highly educated population linked through wide-ranging social networks which communicated information about technology, labor opportunities, and markets; and abundant supplies of capital within the state which supported the entrepreneurs. The peddler distribution system provided efficient sales channels into the mid-1830s, but, after that, firms took advantage of more traditional wholesaling channels. In some sectors, such as the brass industry, firms followed the example of the large Boston-core textile firms, and the brass companies founded their own wholesale distribution agencies in Boston and New York City. The achievements of Connecticut’s firms were evident by 1850. As a share of the nation’s value of production, they accounted for virtually all of the clocks, pins, and suspenders, close to half of the buttons and rubber goods, and about one-third of the brass foundry products, Britannia and plated ware, and hardware.

Difficulty of Duplicating Eastern Methods in the Midwest

The East industrialized first, building on its prosperous agriculture, as some of its entrepreneurs shifted into the national market manufactures of shoes, cotton textiles, and the diverse goods turned out in Connecticut. These industrialists made this shift prior to 1820, and they enhanced their dominance of these products during the subsequent two decades. Manufacturers in the Midwest did not have sufficient intraregional markets to begin producing these goods before 1840; therefore, they could not compete in these national market manufactures. Eastern firms had developed technologies and organizations of production and created sales channels which could not be readily duplicated, and these light, high-value goods could be transported cheaply to the Midwest. When midwestern industrialists faced choices about which manufactures to enter, the eastern light, high-value goods were being sold in the Midwest at prices so low that it was too risky for midwestern firms to attempt to compete. Instead, these firms moved into a wide range of local and regional market manufactures which also existed in the East, but which cost too much to transport to the Midwest. These goods included lumber and food products (e.g., flour and whiskey), bricks, chemicals, machinery, and wagons.

The American Manufacturing Belt

The Midwest Joins the American Manufacturing Belt after 1860

Between 1840 and 1860, Midwestern manufacturers made strides in building an industrial infrastructure, and they were positioned to join with the East to constitute the American Manufacturing Belt, the great concentration of manufacturing which would sprawl from the East Coast to the edge of the Great Plains. The Belt’s extent became mostly set within a decade or so after 1860, because technologies and organizations of production and of sales channels had lowered costs across a wide array of manufactures, and improvements in transportation (such as an integrated railroad system) and communication (such as the telegraph) reduced distribution costs. Thus, increasing shares of industrial production were sold in interregional markets.

Lack of Industrialization in the South

Although the South had prosperous farms, it failed to build a deep and broad industrial infrastructure prior to 1860, because much of its economy rested on a slave agricultural system. In this economy, investments were heavily concentrated in slaves rather than in an urban and industrial infrastructure. Local and regional demand remained low across much of the South, because slaves were not able to freely express their consumption demands and population densities remained low, except in a few agricultural areas. Thus, the market thresholds for many manufactures were not met, and, if thresholds were met, the demand was insufficient to support more than a few factories. By the 1870s, when the South had recovered from the Civil War and its economy was reconstructed, eastern and midwestern industrialists had built strong positions in many manufactures. And, as new industries emerged, the northern manufacturers had the technological and organizational infrastructure and distribution channels to capture dominance in the new industries.

In a similar fashion, the Great Plains, the Southwest, and the West were settled too late for their industrialists to be major producers of national market goods. Manufacturers in these regions focused on local and regional market manufactures. Some low wage industries (such as textiles) began to move to the South in significant numbers after 1900, and the emergence of industries based on high technology after 1950 led to new manufacturing concentrations which rested on different technologies. Nonetheless, the American Manufacturing Belt housed the majority of the nation’s industry until the middle of the twentieth century.

This essay is based on David R. Meyer, The Roots of American Industrialization, Baltimore: Johns Hopkins University Press, 2003.

Additional Readings

Atack, Jeremy, and Fred Bateman. To Their Own Soil: Agriculture in the Antebellum North. Ames, IA: Iowa State University Press, 1987.

Baker, Andrew H., and Holly V. Izard. “New England Farmers and the Marketplace, 1780-1865: A Case Study.” Agricultural History 65 (1991): 29-52.

Barker, Theo, and Dorian Gerhold. The Rise and Rise of Road Transport, 1700-1990. New York: Cambridge University Press, 1995.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. New York: Cambridge University Press, 2000.

Brown, Richard D. Knowledge is Power: The Diffusion of Information in Early America, 1700-1865. New York: Oxford University Press, 1989.

Clark, Christopher. The Roots of Rural Capitalism: Western Massachusetts, 1780-1860. Ithaca, NY: Cornell University Press, 1990.

Dalzell, Robert F., Jr. Enterprising Elite: The Boston Associates and the World They Made. Cambridge, MA: Harvard University Press, 1987.

Durrenberger, Joseph A. Turnpikes: A Study of the Toll Road Movement in the Middle Atlantic States and Maryland. Cos Cob, CT: John E. Edwards, 1968.

Field, Alexander J. “On the Unimportance of Machinery.” Explorations in Economic History 22 (1985): 378-401.

Fishlow, Albert. American Railroads and the Transformation of the Ante-Bellum Economy. Cambridge, MA: Harvard University Press, 1965.

Fishlow, Albert. “Antebellum Interregional Trade Reconsidered.” American Economic Review 54 (1964): 352-64.

Goodrich, Carter, ed. Canals and American Economic Development. New York: Columbia University Press, 1961.

Gross, Robert A. “Culture and Cultivation: Agriculture and Society in Thoreau’s Concord.” Journal of American History 69 (1982): 42-61.

Hoke, Donald R. Ingenious Yankees: The Rise of the American System of Manufactures in the Private Sector. New York: Columbia University Press, 1990.

Hounshell, David A. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Jeremy, David J. Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Jones, Chester L. The Economic History of the Anthracite-Tidewater Canals. University of Pennsylvania Series on Political Economy and Public Law, no. 22. Philadelphia: John C. Winston, 1908.

Karr, Ronald D. “The Transformation of Agriculture in Brookline, 1770-1885.” Historical Journal of Massachusetts 15 (1987): 33-49.

Lindstrom, Diane. Economic Development in the Philadelphia Region, 1810-1850. New York: Columbia University Press, 1978.

McClelland, Peter D. Sowing Modernity: America’s First Agricultural Revolution. Ithaca, NY: Cornell University Press, 1997.

McMurry, Sally. Transforming Rural Life: Dairying Families and Agricultural Change, 1820-1885. Baltimore: Johns Hopkins University Press, 1995.

McNall, Neil A. An Agricultural History of the Genesee Valley, 1790-1860. Philadelphia: University of Pennsylvania Press, 1952.

Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War. New York: Cambridge University Press, 2000.

Mancall, Peter C. Valley of Opportunity: Economic Culture along the Upper Susquehanna, 1700-1800. Ithaca, NY: Cornell University Press, 1991.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Meyer, David R. “The Division of Labor and the Market Areas of Manufacturing Firms.” Sociological Forum 3 (1988): 433-53.

Meyer, David R. “Emergence of the American Manufacturing Belt: An Interpretation.” Journal of Historical Geography 9 (1983): 145-74.

Meyer, David R. “The Industrial Retardation of Southern Cities, 1860-1880.” Explorations in Economic History 25 (1988): 366-86.

Meyer, David R. “Midwestern Industrialization and the American Manufacturing Belt in the Nineteenth Century.” Journal of Economic History 49 (1989): 921-37.

Ransom, Roger L. “Interregional Canals and Economic Specialization in the Antebellum United States.” Explorations in Entrepreneurial History 5, no. 1 (1967-68): 12-35.

Roberts, Christopher. The Middlesex Canal, 1793-1860. Cambridge, MA: Harvard University Press, 1938.

Rothenberg, Winifred B. From Market-Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850. Chicago: University of Chicago Press, 1992.

Scranton, Philip. Proprietary Capitalism: The Textile Manufacture at Philadelphia, 1800-1885. New York: Cambridge University Press, 1983.

Shlakman, Vera. “Economic History of a Factory Town: A Study of Chicopee, Massachusetts.” Smith College Studies in History 20, nos. 1-4 (1934-35): 1-264.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John J. Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48 (1988): 813-50.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization: Evidence from the American Northeast, 1820-1860.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 679-729. Chicago: University of Chicago Press, 1986.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Economic Growth before 1860: Revised Conjectures.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 11-27. Stanford, CA: Stanford University Press, 1994.

Weiss, Thomas. “Long-Term Changes in U.S. Agricultural Output per Worker, 1800-1900.” Economic History Review 46 (1993): 324-41.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Wood, Frederic J. The Turnpikes of New England. Boston: Marshall Jones, 1919.

Wood, Gordon S. The Radicalism of the American Revolution. New York: Alfred A. Knopf, 1992.

Zevin, Robert B. “The Growth of Cotton Textile Production after 1815.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman, 122-47. New York: Harper & Row, 1971.

Citation: Meyer, David. “American Industrialization”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-roots-of-american-industrialization-1790-1860/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeastern Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, which is the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom, and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. The size of Indonesia’s population is about 230 million (2002), roughly 60% of whom live on Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Indonesia Philippines Thailand Japan
1900 745 1,033 812 1,180
1913 904 1,066 835 1,385
1950 840 1,070 817 1,926
1973 1,504 1,959 1,874 11,439
1990 2,516 2,199 4,645 18,789
2000 3,041 2,385 6,335 20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered from severe incidents that prevented further expansion. These incidents often originated in the internal institutional or political spheres (either after independence or in colonial times), although external influences such as the 1930s Depression also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself, but it also can be applied to the heterogeneity in the national features of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is between densely populated Java and the sparsely populated Outer Islands, which Java has a long tradition of dominating politically and economically. But also within Java and within the various Outer Islands, one encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries, the flourishing and enterprising interregional trade has fostered regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) had ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also incorporated weaknesses in the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945, when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia analyzes a range of topics, from the dynamic exports of raw materials and the dualist economy, in which both Western and Indonesian entrepreneurs participated, to the strong measure of regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesian, Australian, and American scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al 2002). This highly recommended textbook aims at a juxtaposition of three themes: globalization, economic integration and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch. The period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name which refers to all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth and seventeenth century

Current research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups, such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a weak grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian, and most data on economic matters were handed down by western observers with a limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (whose yields were not necessarily meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was only concentrated in a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, a lack of investment capital, and increasing competition from England led to its demise, and in 1799 the VOC was dissolved (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by Javanese prince Diponegoro. To repress this revolt and establish firm rule in Java, colonial expenses increased, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

                     1840-1844   1845-1849
Coffee                  40,278      24,549
Sugar                    8,218       4,136
Indigo                   7,836       7,726
Pepper, Tea                647       1,725
Total net profits       39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                              1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products     227.0     473.9     652.7     641.8
Costs of transport etc. (NHM)                    88.0     165.4     138.7     114.7
Sum of expenses                                  59.2     175.1     275.3     276.6
Total net profits*                              150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted, but the export of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share of these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both on large-scale European plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration as the road system, railroad system (in Java and Sumatra) and port system were improved. Among shipping lines, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods, and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. In some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient (as for example in the production of rice) and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery was relatively slow in coming. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Economic Key Aggregates 1830-1990

                                GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840        n.a.             13.5             5.0               8.5
Cultivation System 1840-1848        n.a.              1.5            -4.5           [very low]
Cultivation System 1849-1873        n.a.              1.5             1.5               2.6
Liberal Period 1874-1900         [very low]           3.1            -1.9               2.3
Ethical Period 1901-1928             1.7              5.8            17.4               4.1
Great Depression 1929-1934          -3.4             -3.9           -19.7               0.4
Prewar Recovery 1934-1940            2.5              2.2             7.8               3.4
Old Order 1950-1965                  1.0              0.8            -2.1               1.8
New Order 1966-1990                  4.4              5.4            11.6              10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
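The growth rates in Table 4 were obtained, as the note indicates, by fitting an exponential curve to each series. This is equivalent to an ordinary least-squares fit of log(y) = a + b·t, with the average annual growth rate given by exp(b) − 1. A minimal sketch of that calculation (the data below are hypothetical, not Booth's underlying series):

```python
import math

def average_annual_growth(years, values):
    """Fit log(values) = a + b*years by ordinary least squares and
    return the implied average annual growth rate, exp(b) - 1."""
    n = len(years)
    logs = [math.log(v) for v in values]
    t_mean = sum(years) / n
    y_mean = sum(logs) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(years, logs)) \
        / sum((t - t_mean) ** 2 for t in years)
    return math.exp(b) - 1

# Hypothetical example: a series growing at exactly 2 percent per year,
# for which the fitted exponential recovers that rate.
years = list(range(1900, 1911))
values = [100 * 1.02 ** (t - 1900) for t in years]
print(round(100 * average_annual_growth(years, values), 1))  # 2.0
```

Fitting an exponential rather than simply comparing endpoints uses every observation in the period, so a single anomalous first or last year does not distort the reported average.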

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. During the period 1949-1965 there was little economic growth, and what growth occurred was concentrated in the years 1950 to 1957. In 1958-1965, growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and the absence of foreign capital were detrimental to economic development after the government eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution, and estranged the suppliers of western capital even further when he developed communist sympathies.

After 1966, the second president, general Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime which lasted until 1997 (see below for the three phases in New Order). In this period industrial output quickly increased, including steel, aluminum, and cement but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the price of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis in 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices characterized the final years of the 32-year-old, strongly centralized, autocratic Soeharto regime.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, form some of its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but has not as yet fully taken place by the summer of 2003 when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes which have been subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, many regions of Indonesia had a traditionally strong and dynamic class of indigenous entrepreneurs (traders and peasants). Resilient in times of economic malaise and adept at symbiosis with traders of other Asian nationalities (particularly the Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western sector and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy that is less judgmental about the characteristics of economic development in the Asian sector. Some scholars focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of and the motives for Dutch colonial expansion. Dutch imperialism can be viewed as having a rather complex mix of political, economic and military motives which influenced decisions about colonial borders, establishing political control in order to exploit oil and other natural resources, and preventing local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion was from 1825-1870. During this phase interference with economic matters outside Java increased slowly but military intervention was occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896. During this phase initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by extension of colonial (military) control in the regions concerned. The third and final phase was characterized by full-scale aggressive imperialism (often known as ‘pacification’) and lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which had long been dominant in economic and political terms, and the Outer Islands, a large, sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas that were propelled forward by export trade, whether of Indonesian or European origin (examples are Palembang, East Sumatra, Southeast Kalimantan), and areas that stayed behind and only slowly reaped the fruits of the modernization taking place elsewhere (for example Benkulu, Timor, Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, the official Dutch policy was to abstain from interference with local affairs; the scarce resources of the Dutch colonial administrators were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. This resulted in the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), installed in 1918 but with only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was solved by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included the penal clause (which allowed for punishment by plantation owners). In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing abuse of coolies on the estates. The living circumstances and treatment of the coolies have been the subject of debate, particularly regarding the question of whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? Both the detrimental effects of this drain of capital and the exact methods of measuring it have been debated, the more so since the colony received European entrepreneurial initiative in return. There was also a second drain to the home countries of other immigrant ethnic groups, mainly to China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority population in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may have sometimes channeled capital funds to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period, 1945-1965, was characterized by economic (and political) chaos, although some economic growth undeniably did take place during these years. However, macroeconomic instability, lack of foreign investment and structural rigidity formed economic problems that were closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry, but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid programs were attracted, unbridled population growth was reduced through family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil-prices), and rapid export-led growth. During this last phase, commentators (including academic economists) were increasingly concerned about the thriving corruption at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt, and the weak financial system. Its severity must also be attributed to political factors: the monetary crisis (KRISMON) became a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government had become heavily centralized and corrupt and was unable to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2003: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden, Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900‑1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca NY: Cornell University Press 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/