
History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities suffered by a group of workers over a period of one year.1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago, in 1900, about three hundred out of every one hundred thousand miners were killed on the job each year.2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken around 1900 showed that only about half of all workers fatally injured recovered anything, and that their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the workings, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890-1904
(Fatality Rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were lightly built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889-1901
(Fatality Rates per Thousand Workers per Year)

                                  1889    1895    1901
British railroad workers
  All causes                      1.14    0.95    0.89
British trainmen (a)
  All causes                      4.26    3.22    2.21
  Coupling                        0.94    0.83    0.74
American railroad workers
  All causes                      2.67    2.31    2.50
American trainmen
  All causes                      8.52    6.45    7.35
  Coupling                        1.73 (c) 1.20   0.78
  Braking (b)                     3.25 (c) 2.44   2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increasing output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also date from this era, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s, as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly-formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Food and Drug Administration, the Federal Reserve System and much else. Work safety also became of increased public concern and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893 and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany. He was impressed, he said, with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs along with the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont and in whole industries such as steel making (see Table 3) safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect on them. Underground coal mining accidents also showed only modest improvement: safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, in 1940 six disastrous blasts that killed 276 men finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine’: the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850-World War I.” Bulletin of the History of Medicine 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Judy Daley. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan. 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers result in [10/(450×2000)]x1,000,000 = 11.1 injuries per million hours worked.
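Stated as formulas (a restatement of the footnote’s own arithmetic, using the same hypothetical figures of 10 injuries, 450 workers and a 2,000-hour work year):

$$\text{rate per thousand workers} = \frac{10}{450}\times 1{,}000 \approx 22.2$$

$$\text{rate per million hours} = \frac{10}{450\times 2{,}000}\times 1{,}000{,}000 \approx 11.1$$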

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun, Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism.” Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, “Air Brakes for Freight Trains”; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety. Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation,” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety, and Viscusi, Risk by Choice.

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable species, sparsely peopled by gatherers and fishers, the Eastern Bank of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars, with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center of international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and a still spasmodic GDP growth.

GDP growth shows a pattern of Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (the 1880s, 1900s, 1920s, 1940s and even the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force in setting the cycle off, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period Per capita GDP fall (%) Length of recession (years) Time to pre-crisis levels (years) Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, and was mainly domestic-market orientated. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focusing on commerce, transport and traditional state bureaucracy during the first globalization boom; focusing on health care, education and social services, during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and focusing on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. This equalizing trend was later reinforced by industrialization policies, democratization, the introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (closely followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of relative decline was somewhat weak during the first half of the twentieth century, deepened during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear — as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reverted afterwards.

The gap in life-expectancy at birth has always been much smaller than the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

Years: 1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay: 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina: 63 34 38 31 32 29 25 25 24 21 15 16
Brazil: 23 8 8 8 8 8 7 9 9 13 11 10
Latin America: 13 12 13 10 9 9 9 6 6
USA: 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay: 57 65 72 79 85 91 92 94 95 97 99
Argentina: 57 65 72 79 85 91 93 94 94 96 98
Brazil: 39 38 37 42 46 51 61 69 76 81 86
Latin America: 28 30 34 37 42 47 56 65 71 77 83
USA: 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay: 23 31 31 30 34 42 52 46 43
Argentina: 28 41 42 36 39 43 55 44 45
Brazil: 12 11 12 14 18 22 30 42
Latin America: (no data)
USA: 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay: 102 100 91 85 91 97 97 97 95 96 96
Argentina: 81 85 86 90 88 90 93 94 95 96 95
Brazil: 60 60 56 58 58 63 79 83 85 88 88
Latin America: 65 63 58 58 59 63 71 77 81 88 87
USA: 100 100 100 100 100 100 100 100 100 100 100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the reconstruction that followed the Great War (Guerra Grande) after 1851, the Uruguayan population grew rapidly (fueled by high rates of natural increase and immigration) and so did per capita output. Productivity grew due to several causes, including: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, landholdings were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and provided the main part of Uruguay’s services, civil servants and the weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary products, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector showed a very poor performance, due to lack of innovation away from natural pastures. In the 1930s, its performance deteriorated further, mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially affected protectionism, productivity growth and domestic demand: the government raised revenue by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.

However, rent-seeking industries searching for protection and a weak clientelist state, crowded with civil servants recruited in exchange for political favors to the political parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which had been created by both traditional parties while the state was expanding at the national and local level, was now unable to absorb the increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-oriented towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-oriented to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had remained above 50 percent since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and obviously collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource intensive exports to the region and other emergent markets, with a modest intra-industrial trade mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows which fueled a rather volatile growth period. However, by the year 2000, Uruguay had a much worse position in relation to the leaders of the world economy as measured by per capita GDP, real wages, equity and education coverage, than it had fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by and highly dependent on foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, thus making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. Over that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Latin American Historical Review, 1985.

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica by Luis Bertola. Montevideo: Uruguay en la región y el mundo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Citation: Bertola, Luis. “An Overview of the Economic History of Uruguay since the 1870s”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/article/Bertola.Uruguay.final

The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles offshore of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains rise directly out of the ocean along the eastern shore facing the Pacific, so that this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the Western Coastal Plain that faces China. This plain is crossed by east-west rivers, which occasionally bring floods of water down from the mountains, creating broad boulder-strewn flood plains. Until modern times, these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese and a smaller number of Japanese. The aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established place of rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until it was ceded to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland. The island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was finally and permanently rescinded in 1788. However, the island’s Chinese population grew about two percent per year in the century following the Qing takeover. Both illegal immigration and natural increase were important components of this growth. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, were constantly pushing the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated, or else moved toward the mountains where they continued hunting, learned to raise cattle or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.

Chinese pioneers made extensive use of written contracts but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements was usually a function of the lineage. Being part of a strong lineage was crucial to economic success and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on one’s place of residence, occupation, place of origin, surname, etc. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Due to property rights problems, land sales contracts often included the signature of not only the owner, but also his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait an indefinite period of time before repaying the loan, this led to an awkward situation in which the person who controlled the land did not have permanent ownership and had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island and throughout the nineteenth century, the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north of the island and its population was shorter in stature and had a lower life expectancy. The north of the island was better suited to rice production and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, they began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea produced primarily for the U.S. market.

During the last years of the Qing dynasty’s rule in Taiwan, Taiwan was made a full province of China and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than the Japanese economy. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of Taiwan’s urban population), was shipped to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s was a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy was left in shambles by the events of the 1940s. It had lost its protected Japanese markets and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest and (3) severely restricted the size of individual landholdings forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s annual expected harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and effect on agricultural investment and productivity are still hotly debated.

High-speed growth accompanied by rapid industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports, produced by small enterprises bound together in flexible subcontracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent, which cut the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in pre-war Japan, while Taiwan’s only important pre-war manufactured export, imitation panama hats, faced a 25% tariff in the U.S., their primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses whose products would not be sold domestically.
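The halving of land per capita follows directly from compound population growth. A back-of-the-envelope check, assuming a rate of 3.1 percent per year (the exact rate is an assumption; the text says only "slightly over three percent"):

```python
# Back-of-the-envelope check: does ~3% annual population growth over
# 1940-1962 roughly halve land per capita (total land held fixed)?

growth_rate = 0.031   # "slightly over three percent" per year (assumed value)
years = 1962 - 1940   # 22 years

population_multiple = (1 + growth_rate) ** years
print(f"Population multiple: {population_multiple:.2f}")            # about 1.96 -- roughly doubled
print(f"Land per capita multiple: {1 / population_multiple:.2f}")   # about 0.51 -- roughly halved
```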

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades are poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and enrollment in higher education increased many times over. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.
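The claim that factor accumulation explains most of the growth rests on growth-accounting decompositions. The following is a minimal sketch of the standard calculation; the factor shares and input growth rates are purely illustrative assumptions, not estimates from the literature on Taiwan.

```python
# Minimal growth-accounting (Solow) decomposition of the kind used to ask
# how much of Taiwan's growth came from factor accumulation. All numbers
# other than the 10% GDP growth cited above are illustrative assumptions.

alpha = 0.35        # capital's share of income (assumed)
g_output = 0.10     # annual real GDP growth, as cited above
g_capital = 0.13    # annual growth of the capital stock (assumed)
g_labor = 0.04      # annual growth of quality-adjusted labor input (assumed)

g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labor
print(f"Implied TFP growth: {g_tfp:.2%}")   # about 2.9% per year under these assumptions
```

Under assumptions like these, most measured growth is attributed to capital and labor accumulation, with a smaller but still substantial residual attributed to productivity.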

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: MacMillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Koo, Hui-Wen, and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology, 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

State | White 1750 | Black 1750 | White 1790 | Free Nonwhite 1790 | Slave 1790 | White 1810 | Free Nonwhite 1810 | Slave 1810 | White 1860 | Free Nonwhite 1860 | Slave 1860
Connecticut | 108,270 | 3,010 | 232,236 | 2,771 | 2,648 | 255,179 | 6,453 | 310 | 451,504 | 8,643 | —
Delaware | 27,208 | 1,496 | 46,310 | 3,899 | 8,887 | 55,361 | 13,136 | 4,177 | 90,589 | 19,829 | 1,798
Georgia | 4,200 | 1,000 | 52,886 | 398 | 29,264 | 145,414 | 1,801 | 105,218 | 591,550 | 3,538 | 462,198
Maryland | 97,623 | 43,450 | 208,649 | 8,043 | 103,036 | 235,117 | 33,927 | 111,502 | 515,918 | 83,942 | 87,189
Massachusetts | 183,925 | 4,075 | 373,187 | 5,369 | — | 465,303 | 6,737 | — | 1,221,432 | 9,634 | —
New Hampshire | 26,955 | 550 | 141,112 | 630 | 157 | 182,690 | 970 | — | 325,579 | 494 | —
New Jersey | 66,039 | 5,354 | 169,954 | 2,762 | 11,423 | 226,868 | 7,843 | 10,851 | 646,699 | 25,318 | —
New York | 65,682 | 11,014 | 314,366 | 4,682 | 21,193 | 918,699 | 25,333 | 15,017 | 3,831,590 | 49,145 | —
North Carolina | 53,184 | 19,800 | 289,181 | 5,041 | 100,783 | 376,410 | 10,266 | 168,824 | 629,942 | 31,621 | 331,059
Pennsylvania | 116,794 | 2,872 | 317,479 | 6,531 | 3,707 | 786,804 | 22,492 | 795 | 2,849,259 | 56,956 | —
Rhode Island | 29,879 | 3,347 | 64,670 | 3,484 | 958 | 73,214 | 3,609 | 108 | 170,649 | 3,971 | —
South Carolina | 25,000 | 39,000 | 140,178 | 1,801 | 107,094 | 214,196 | 4,554 | 196,365 | 291,300 | 10,002 | 402,406
Virginia | 129,581 | 101,452 | 442,117 | 12,866 | 292,627 | 551,534 | 30,570 | 392,518 | 1,047,299 | 58,154 | 490,865
United States | 934,340 | 236,420 | 2,792,325 | 58,277 | 681,777 | 4,486,789 | 167,691 | 1,005,685 | 12,663,310 | 361,247 | 1,775,515

Note: — indicates no figure reported.

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

State | Black/total 1750 (%) | Slave/total 1790 (%) | Slave/total 1810 (%) | Slave/total 1860 (%)
Alabama | — | — | — | 45.12
Arkansas | — | — | — | 25.52
Delaware | 5.21 | 15.04 | 5.75 | 1.60
Florida | — | — | — | 43.97
Georgia | 19.23 | 35.45 | 41.68 | 43.72
Kentucky | — | 16.87 | 19.82 | 19.51
Louisiana | — | — | — | 46.85
Maryland | 30.80 | 32.23 | 29.30 | 12.69
Mississippi | — | — | — | 55.18
Missouri | — | — | — | 9.72
North Carolina | 27.13 | 25.51 | 30.39 | 33.35
South Carolina | 60.94 | 43.00 | 47.30 | 57.18
Tennessee | — | — | 17.02 | 24.84
Texas | — | — | — | 30.22
Virginia | 43.91 | 39.14 | 40.27 | 30.75
Overall | 37.97 | 33.95 | 33.25 | 32.27

Note: — indicates no figure reported for that year.

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State | Total slaveholders | Held 1 slave | Held 2 slaves | Held 3 slaves | Held 4 slaves | Held 5 slaves | Held 1-5 slaves | Held 100-499 slaves | Held 500+ slaves
AL | 33,730 | 5,607 | 3,663 | 2,805 | 2,329 | 1,986 | 16,390 | 344 | —
AR | 11,481 | 2,339 | 1,503 | 1,070 | 894 | 730 | 6,536 | 65 | 1
DE | 587 | 237 | 114 | 74 | 51 | 34 | 510 | — | —
FL | 5,152 | 863 | 568 | 437 | 365 | 285 | 2,518 | 47 | —
GA | 41,084 | 6,713 | 4,335 | 3,482 | 2,984 | 2,543 | 20,057 | 211 | 8
KY | 38,645 | 9,306 | 5,430 | 4,009 | 3,281 | 2,694 | 24,720 | 7 | —
LA | 22,033 | 4,092 | 2,573 | 2,034 | 1,536 | 1,310 | 11,545 | 543 | 4
MD | 13,783 | 4,119 | 1,952 | 1,279 | 1,023 | 815 | 9,188 | 16 | —
MS | 30,943 | 4,856 | 3,201 | 2,503 | 2,129 | 1,809 | 14,498 | 315 | 1
MO | 24,320 | 6,893 | 3,754 | 2,773 | 2,243 | 1,686 | 17,349 | 4 | —
NC | 34,658 | 6,440 | 4,017 | 3,068 | 2,546 | 2,245 | 18,316 | 133 | —
SC | 26,701 | 3,763 | 2,533 | 1,990 | 1,731 | 1,541 | 11,558 | 441 | 8
TN | 36,844 | 7,820 | 4,738 | 3,609 | 3,012 | 2,536 | 21,715 | 47 | —
TX | 21,878 | 4,593 | 2,874 | 2,093 | 1,782 | 1,439 | 12,781 | 54 | —
VA | 52,128 | 11,085 | 5,989 | 4,474 | 3,807 | 3,233 | 28,588 | 114 | —
TOTAL | 393,967 | 78,726 | 47,244 | 35,700 | 29,713 | 24,886 | 216,269 | 2,341 | 22

Note: — indicates none reported.

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? U.S. slaves enjoyed an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.
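A back-of-the-envelope check using the census totals in Table 2 shows what such growth implies for the average annual rate of increase. This is a rough calculation only; it ignores the small volume of smuggling that continued after the 1808 ban.

```python
# Implied average annual growth rate of the slave population, 1810-1860,
# using the census totals in Table 2 above.

pop_1810 = 1_103_700   # Table 2
pop_1860 = 3_950_511   # Table 2
years = 1860 - 1810

annual_growth = (pop_1860 / pop_1810) ** (1 / years) - 1
print(f"Implied annual growth rate: {annual_growth:.1%}")   # about 2.6% per year
```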

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens; the genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls of the same age sold for 65 percent. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.
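A small sketch makes these ratios concrete. The benchmark price of a prime-age man is a hypothetical round number; only the percentage ratios come from the figures quoted above.

```python
# Relative slave prices by age and sex, using the ratios quoted above.
# The benchmark price of a prime-age (27-year-old) man is a hypothetical
# round number, not a figure from the article.

prime_male_price = 1_000

boy_14 = 0.71 * prime_male_price            # boys aged 14: 71% of a prime man's price
girl_14 = 0.65 * prime_male_price           # girls aged 14: 65%
adult_woman_low = 0.80 * prime_male_price   # adult women: 80-90% of comparable men
adult_woman_high = 0.90 * prime_male_price

print(boy_14, girl_14, adult_woman_low, adult_woman_high)   # 710.0 650.0 800.0 900.0
```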


[Figure not reproduced here. Source: Fogel and Engerman (1974).]

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1381 in 1861 and for $1116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.


[Figure not reproduced here. Source: data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known instance shows that contemporaneous free labor thought that urban slavery may even have worked too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
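The logic of such an index can be illustrated with a minimal sketch: output divided by a composite measure of labor, land, and capital. The Cobb-Douglas form, the factor weights, and the farm figures below are illustrative assumptions, not Fogel and Engerman's actual data or weights.

```python
# Minimal sketch of a total-factor-productivity comparison: output per
# composite unit of labor, land, and capital. The functional form, weights,
# and farm figures are illustrative assumptions only.

def tfp(output, labor, land, capital, shares=(0.6, 0.2, 0.2)):
    """Geometric index of output per unit of composite input."""
    s_labor, s_land, s_capital = shares
    composite_input = (labor ** s_labor) * (land ** s_land) * (capital ** s_capital)
    return output / composite_input

slave_farm = tfp(output=153, labor=10, land=100, capital=20)   # hypothetical slave farm
free_farm = tfp(output=100, labor=10, land=100, capital=20)    # same inputs, less output

print(f"Relative efficiency: {slave_farm / free_farm:.2f}")    # 1.53 -- "53 percent more efficient"
```

With identical inputs, the ratio of the two indexes reduces to the ratio of outputs, which is the sense in which the efficiency comparisons reported below should be read.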

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm otherwise identical to a free farm (in terms of the amount of land, livestock, machinery and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.
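The mechanics behind such rate-of-return estimates can be sketched as follows. Every figure in the sketch is a hypothetical round number chosen only to show the calculation, not an estimate from Conrad and Meyer or Fogel and Engerman.

```python
# Back-of-the-envelope return on a slave purchase, in the spirit of the
# Conrad-Meyer calculation mentioned earlier. All figures are hypothetical.

purchase_price = 1_500.0       # hypothetical price of a prime field hand
net_annual_earnings = 150.0    # hypothetical revenue minus maintenance and supervision
working_years = 30             # hypothetical remaining working life

def internal_rate_of_return(price, cash_flow, years, lo=0.0, hi=1.0, tol=1e-6):
    """Solve by bisection for r such that the present value of the cash flows equals the price."""
    while hi - lo > tol:
        r = (lo + hi) / 2
        pv = sum(cash_flow / (1 + r) ** t for t in range(1, years + 1))
        lo, hi = (r, hi) if pv > price else (lo, r)
    return (lo + hi) / 2

r = internal_rate_of_return(purchase_price, net_annual_earnings, working_years)
print(f"Implied annual return: {r:.1%}")   # roughly 9-10% under these assumptions
```

Estimates of this kind, using actual price, yield, and maintenance data, are what lie behind the roughly ten percent figure cited above.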

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
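The disputed quantity is an expropriation (or exploitation) rate: the share of the output attributable to a slave that the slave did not receive back as food, clothing, shelter, and other consumption. A minimal sketch of the calculation, with hypothetical figures:

```python
# Sketch of an expropriation (exploitation) rate. Both figures below are
# hypothetical round numbers used only to show the calculation.

output_attributable = 250.0    # hypothetical annual value of a slave's output
consumption_received = 125.0   # hypothetical annual value of goods the slave received

expropriation_rate = 1 - consumption_received / output_attributable
print(f"Expropriation rate: {expropriation_rate:.0%}")   # 50% under these assumptions
```

Whether the rate comes out near ten percent or above fifty percent depends on how the output attributable to the slave and the consumption received are measured over the life cycle, which is the heart of the disagreement described above.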

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged use of labor saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employer’s interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number or surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything and their average compensation only amounted to about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the working, making supervision easy, and required little blasting. American coal deposits by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1
British and American Mine Safety, 1890-1904
(Fatality rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers – and to their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go between moving cars for coupling and uncoupling and ride the cars to work brakes. Thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were cheaply built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than work on British railroads (see Table 2).6

Table 2
Comparative Safety of British and American Railroad Workers, 1889-1901
(Fatality Rates per Thousand Workers per Year)

Group and cause 1889 1895 1901
British railroad workers, all causes 1.14 0.95 0.89
British trainmen (a), all causes 4.26 3.22 2.21
British trainmen, coupling 0.94 0.83 0.74
American railroad workers, all causes 2.67 2.31 2.50
American trainmen, all causes 8.52 6.45 7.35
American trainmen, coupling 1.73 (c) 1.20 0.78
American trainmen, braking (b) 3.25 (c) 2.44 2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.

Note: Death rates are per thousand employees.
a. Guards, brakemen, and shunters.
b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increasing output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions date from this period as well, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900, when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Federal Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became a matter of increased public concern, and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893, and after 1900 they campaigned for more of the same. In response Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because, unlike the rules governing automatic couplers and air brakes, they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers' liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could claim. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen's compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany. He said he was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers' liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support, while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs and the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont and in whole industries such as steel making (see Table 3) safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3
Steel Industry Fatality and Injury Rates, 1910-1939
(Rates are per million manhours)

Period Fatality Rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement. Safety was also expensive in coal mining, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, six disastrous blasts in 1940 that killed 276 men finally led to federal mine inspection in 1941.16

Table 4
Work Injury Rates, Manufacturing and Coal Mining, 1926-1970
(Per Million Manhours)

Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA) that same year. The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine’: the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. "The Peril of the Broken Rail: The Carriers, the Steel Companies, and Rail Technology, 1900-1945." Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. "Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America." Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers' Compensation. Chicago: University of Chicago Press, 2000. The best discussion of how employers' liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London: HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Daley, Judy. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be .0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2,000 hours, ten injuries among 450 workers yields [10/(450 x 2,000)] x 1,000,000 = 11.1 injuries per million hours worked.
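
To make the conversions in this note concrete, here is a minimal Python sketch (illustrative only; the function name is ours, and the figures simply restate the note's hypothetical example of ten injuries among 450 workers averaging 2,000 hours per year):

# Convert a raw injury count into the rate measures used in this article.
def injury_rates(injuries, workers, hours_per_worker=2000):
    # Rate per thousand workers employed.
    per_thousand_workers = injuries / workers * 1000
    # Rate per million hours worked.
    per_million_hours = injuries / (workers * hours_per_worker) * 1000000
    return per_thousand_workers, per_million_hours

per_thousand, per_million = injury_rates(10, 450)
print(round(per_thousand, 1))  # 22.2 injuries per thousand workers
print(round(per_million, 1))   # 11.1 injuries per million hours worked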

2 For statistics on work injuries from 1922-1970 see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers' liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What's a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun; Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, "Historical Summary." Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, "Fraternalism, Paternalism." Aldrich, "Train Wrecks to Typhoid Fever."

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, "From Common Law."

10 On the evolution of freight car technology see White, American Railroad Freight Car; Usselman, "Air Brakes for Freight Trains"; and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety; Aldrich, "'The Needless Peril.'"

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, "From Exit to Voice."

16 Aldrich, "'The Needless Peril,'" and Humphrey, "Historical Summary."

17 Derickson, "Participative Regulation," and Fairris, "Institutional Change," also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety (Cambridge: MIT Press, 1979), and Viscusi, Risk By Choice (Cambridge: Harvard University Press, 1983).

Citation: Aldrich, Mark. “History of Workplace Safety in the United States, 1880-1970”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/history-of-workplace-safety-in-the-united-states-1880-1970/

Economic History of Portugal

Luciano Amaral, Universidade Nova de Lisboa

Main Geographical Features

Portugal is the south-westernmost country of Europe. With the approximate shape of a vertical rectangle, it extends at most about 561 km from north to south and 218 km from east to west, and is delimited (in its north-south range) by the parallels 37° and 42° N, and (in its east-west range) by the meridians 6° and 9.5° W. To the west, it faces the Atlantic Ocean, which separates it from the American continent by a few thousand kilometers. To the south, it still faces the Atlantic, but the distance to Africa is only a few hundred kilometers. To the north and the east, it shares land frontiers with Spain, and both countries constitute the Iberian Peninsula, a landmass separated directly from France and, then, from the rest of the continent by the Pyrenees. Two Atlantic archipelagos are also part of Portugal: the Azores – nine islands in the same latitudinal range as mainland Portugal, but much further west, with a longitude between 25° and 31° W – and Madeira – two islands, to the southwest of the mainland, 16° and 17° W, 32.5° and 33° N.

Climate in mainland Portugal is of the temperate sort. Due to its southern position and proximity to the Mediterranean Sea, the country’s weather still presents some Mediterranean features. Temperature is, on average, higher than in the rest of the continent. Thanks to its elongated form, Portugal displays a significant variety of landscapes and sometimes brisk climatic changes for a country of such relatively small size. Following a classical division of the territory, it is possible to identify three main geographical regions: a southern half – with practically no mountains and a very hot and dry climate – and a northern half subdivided into two other vertical sub-halves – with a north-interior region, mountainous, cool but relatively dry, and a north-coast region, relatively mountainous, cool and wet. Portugal’s population is close to 10,000,000, in an area of about 92,000 square kilometers (35,500 square miles).

The Period before the Creation of Portugal

We can only talk of Portugal as a more or less clearly identified and separate political unit (although still far from a defined nation) from the eleventh or twelfth centuries onwards. The geographical area which constitutes modern Portugal was not, of course, an eventless void before that period. But scarcity of space allows only a brief examination of the earlier period, concentrating on its main legacy to future history.

Roman and Visigothic Roots

That legacy is overwhelmingly marked by the influence of the Roman Empire. Portugal owes to Rome its language (a descendant of Latin) and main religion (Catholicism), as well as its primary juridical and administrative traditions. Interestingly enough, little of the Roman heritage passed directly to the period of existence of Portugal as a proper nation. Momentous events filtered the transition. Romans first arrived in the Iberian Peninsula around the third century B.C., and kept their rule until the fifth century of the Christian era. Then, they succumbed to the so-called “barbarian invasions.” Of the various peoples that then roamed the Peninsula, certainly the most influential were the Visigoths, a people of Germanic origin. The Visigoths may be ranked as the second most important force in the shaping of future Portugal. The country owes them the monarchical institution (which lasted until the twentieth century), as well as the preservation both of Catholicism and (although substantially transformed) parts of Roman law.

Muslim Rule

The most spectacular episode following Visigoth rule was the Muslim invasion of the eighth century. Islam ruled the Peninsula from then until the fifteenth century, although occupying an increasingly smaller area from the ninth century onwards, as the Christian Reconquista started repelling it with growing efficiency. Muslim rule set the area on a path different from the rest of Western Europe for a few centuries. However, apart from some ethnic traits legated to its people, a few words in its lexicon, as well as certain agricultural, manufacturing and sailing techniques and knowledge (of which the latter had significant importance to the Portuguese naval discoveries), nothing of the magnitude of the Roman heritage was left in the peninsula by Islam. This is particularly true of Portugal, where Muslim rule was less effective and shorter than in the South of Spain. Perhaps the most important legacy of Muslim rule was, precisely, its tolerance towards the Roman heritage. Much representative of that tolerance was the existence during the Muslim period of an ethnic group, the so-called moçárabe or mozarabe population, constituted by traditional residents that lived within Muslim communities, accepted Muslim rule, and mixed with Muslim peoples, but still kept their language and religion, i.e. some form of Latin and the Christian creed.

Modern Portugal is a direct result of the Reconquista, the Christian fight against Muslim rule in the Iberian Peninsula. That successful fight was followed by the period when Portugal as a nation came to existence. The process of creation of Portugal was marked by the specific Roman-Germanic institutional synthesis that constituted the framework of most of the country’s history.

Portugal from the Late Eleventh Century to the Late Fourteenth Century

Following the Muslim invasion, a small group of Christians kept their independence, settling in a northern area of the Iberian Peninsula called Asturias. Their resistance to Muslim rule rapidly transformed into an offensive military venture. During the eighth century a significant part of northern Iberia was recovered for Christianity. This frontier, roughly cutting the peninsula in two halves, held firm until the eleventh century. Then the crusaders came, mostly from France and Germany, inserting the area into the overall European crusade movement. By the eleventh century, the original Asturian unit had been divided into two kingdoms, Leon and Navarra, which in turn were subdivided into three new political units, Castile, Aragon and the Condado Portucalense. The Condado Portucalense (the political unit at the origin of future Portugal) resulted from a donation, made in 1096, by the Leonese king to a Crusader coming from Burgundy (France), Count Henry. He did not claim the title of king, a title that would be assumed only by his son, Afonso Henriques (generally accepted as the first king of Portugal), in the first half of the twelfth century.

Condado Portucalense as the King’s “Private Property”

Such political units as the various peninsular kingdoms of that time must be seen as entities differing in many respects from current nations. Not only did their peoples not possess any clear “national consciousness,” but also the kings themselves did not rule them based on the same sort of principle we tend to attribute to current rulers (either democratic, autocratic or any other sort). Both the Condado Portucalense and Portugal were understood by their rulers as something still close to “private property” – the use of quotes here is justified by the fact that private property, in the sense we give to it today, was a non-existent notion then. We must, nevertheless, stress this as the moment in which Portuguese rulers started seeing Portugal as a political unit separate from the remaining units in the area.

Portugal as a Military Venture

Such novelty was strengthened by the continuing war against Islam, which still occupied most of the center and south of what later became Portugal. This is a crucial fact about Portugal in its infancy, and one that helps explain the most important episode in Portuguese history, the naval discoveries: the country in those days was largely a military venture against Islam. As the kingdom expanded to the south in that fight, it did so separately from the other Christian kingdoms existing in the peninsula. Islam and the remaining Iberian Christian kingdoms thus ended up constituting the two main negative forces shaping Portugal's definition as an independent country. The country achieved a clear geographical definition quite early in its history, more precisely in 1249, when King Afonso III completed the conquest of the Algarve from Islam. Remarkably for a continent marked by so much permanent frontier redesign, Portugal then acquired its current geographical shape.

The military nature of the country’s growth gave rise to two of its most important characteristics in early times: Portugal was throughout this entire period a frontier country, and one where the central authority was unable to fully control the territory in its entirety. This latter fact, together with the reception of the Germanic feudal tradition, shaped the nature of the institutions then established in the country. This was particularly important in understanding the land donations made by the crown. These were crucial, for they brought a dispersion of central powers, devolved to local entities, as well as a delegation of powers we would today call “public” to entities we would call “private.” Donations were made in favor of three sorts of groups: noble families, religious institutions and the people in general of particular areas or cities. They resulted mainly from the needs of the process of conquest: noblemen were soldiers, and the crown’s concession of the control of a certain territory was both a reward for their military feats as well as an expedient way of keeping the territory under control (even if in a more indirect way) in a period when it was virtually impossible to directly control the full extent of the conquered area. Religious institutions were crucial in the Reconquista, since the purpose of the whole military effort was to eradicate the Muslim religion from the country. Additionally, priests and monks were full military participants in the process, not limiting their activity to studying or preaching. So, as the Reconquista proceeded, three sorts of territories came into existence: those under direct control of the crown, those under the control of local seigneurs (which subdivided into civil and ecclesiastical) and the communities.

Economic Impact of the Military Institutional Framework

This was an institutional framework that had a direct economic impact. The crown’s donations were not comparable to anything we would nowadays call private property. The land’s donation had attached to it the ability conferred on the beneficiary to a) exact tribute from the population living in it, b) impose personal services or reduce peasants to serfdom, and c) administer justice. This is a phenomenon that is typical of Europe until at least the eighteenth century, and is quite representative of the overlap between the private and public spheres then prevalent. The crown felt it was entitled to give away powers we would nowadays call public, such as those of taxation and administering justice, and beneficiaries from the crown’s donations felt they were entitled to them. As a further limit to full private rights, the land was donated under certain conditions, restricting the beneficiaries’ power to divide, sell or buy it. They managed those lands, thus, in a manner entirely dissimilar from a modern enterprise. And the same goes for actual farmers, those directly toiling the land, since they were sometimes serfs, and even when they were not, had to give personal services to seigneurs and pay arbitrary tributes.

Unusually Tight Connections between the Crown and High Nobility

Much of the history of Portugal until the nineteenth century revolves around the tension between these three layers of power – the crown, the seigneurs and the communities. The main trend in that relationship was, however, in the direction of an increased weight of central power over the others. This is already visible in the first centuries of existence of the country. In a process that may look paradoxical, that increased weight was accompanied by an equivalent increase in seigneurial power at the expense of the communities. This gave rise to a uniquely Portuguese institution, which would be of extreme importance for the development of the Portuguese economy (as we will later see): the extremely tight connection between the crown and the high nobility. As a matter of fact, very early in the country’s history, the Portuguese nobility and Church became much dependent on the redistributive powers of the crown, in particular in what concerns land and the tributes associated with it. This led to an apparently contradictory process, in which at the same time as the crown was gaining ascendancy in the ruling of the country, it also gave away to seigneurs some of those powers usually considered as being public in nature. Such was the connection between the crown and the seigneurs that the intersection between private and public powers proved to be very resistant in Portugal. That intersection lasted longer in Portugal than in other parts of Europe, and consequently delayed the introduction in the country of the modern notion of property rights. But this is something to be developed later, and to fully understand it we must go through some further episodes of Portuguese history. For now, we must note the novelty brought by these institutions. Although they can be seen as unfriendly to property rights from a nineteenth- and twentieth-century vantage point, they represented in fact a first, although primitive and incomplete, definition of property rights of a certain sort.

Centralization and the Evolution of Property

As the crown’s centralization of power proceeded in the early history of the country, some institutions such as serfdom and settling colonies gave way to contracts that granted fuller personal and property rights to farmers. Serfdom was not exceptionally widespread in early Portugal – and tended to disappear from the thirteenth century onwards. More common was the settlement of colonies, a situation in which settlers were simple toilers of land, having to pay significant tributes to either the king or seigneurs, but had no rights over buying and selling the land. From the thirteenth century onwards, as the king and the seigneurs began encroaching on the kingdom’s land and the military situation got calmer, serfdom and settling contracts were increasingly substituted by contracts of the copyhold type. When compared with current concepts of private property, copyhold includes serious restrictions to the full use of private property. Yet, it represented an improvement when compared to the prior legal forms of land use. In the end, private property as we understand it today began its dissemination through the country at this time, although in a form we would still consider primitive. This, to a large extent, repeats with one to two centuries of delay, the evolution that had already occurred in the core of “feudal Europe,” i.e. the Franco-Germanic world and its extension to the British Isles.

Movement toward an Exchange Economy

Precisely as in that core "feudal Europe," such institutional change brought a first moment of economic growth to the country – of course, there are no consistent figures for economic activity in this period, and, consequently, this is entirely based on more or less superficial evidence pointing in that direction. The institutional change just noted was accompanied by a change in the way noblemen and the Church understood their possessions. As the national territory became increasingly sheltered from the destruction of war, seigneurs became less interested in military activity and conquest, and more interested in the good management of the land they already owned. Accompanying that, some vague principles of specialization also appeared. Some of those possessions were thus significantly transformed into agricultural firms devoted to a certain extent to selling on the market. One should not, of course, exaggerate the importance acquired by the exchange of goods in this period. Most of the economy continued to be of a non-exchange or (at best) barter character. But the signs of change were important, as a certain part of the economy (small as it was) led the way to future, more widespread changes. Not by chance, this is the period when we have evidence of the first signs of monetization of the economy, certainly a momentous change (even if initially small in scale), corresponding to an entirely new framework for economic relations.

These essential changes are connected with other aspects of the country’s evolution in this period. First, the war at the frontier (rather than within the territory) seems to have had a positive influence on the rest of the economy. The military front was constituted by a large number of soldiers, who needed constant supply of various goods, and this geared a significant part of the economy. Also, as the conquest enlarged the territory under the Portuguese crown’s control, the king’s court became ever more complex, thus creating one more demand pole. Additionally, together with enlargement of territory also came the insertion within the economy of various cities previously under Muslim control (such as the future capital, Lisbon, after 1147). All this was accompanied by a widespread movement of what we might call internal colonization, whose main purpose was to farm previously uncultivated agricultural land. This is also the time of the first signs of contact of Portuguese merchants with foreign markets, and foreign merchants with Portuguese markets. There are various signs of the presence of Portuguese merchants in British, French and Flemish ports, and vice versa. Much of Portuguese exports were of a typical Mediterranean nature, such as wine, olive oil, salt, fish and fruits, and imports were mainly of grain and textiles. The economy became, thus, more complex, and it is only natural that, to accompany such changes, the notions of property, management and “firm” changed in such a way as to accommodate the new evolution. The suggestion has been made that the success of the Christian Reconquista depended to a significant extent on the economic success of those innovations.

Role of the Crown in Economic Reforms

Of additional importance for the increasing sophistication of the economy is the role played by the crown as an institution. From the thirteenth century onwards, the rulers of the country showed a growing interest in having a well organized economy able to grant them an abundant tax base. Kings such as Afonso III (ruling from 1248 until 1279) and D. Dinis (1279-1325) became famous for their economic reforms. Monetary reforms, fiscal reforms, the promotion of foreign trade, and the promotion of local fairs and markets (an extraordinarily important institution for exchange in medieval times) all point in the direction of an increased awareness on the part of Portuguese kings of the relevance of promoting a proper environment for economic activity. Again, we should not exaggerate the importance of that awareness. Portuguese kings were still significantly (although not entirely) arbitrary rulers, able with one decision to destroy years of economic hard work. But changes were occurring, and some in a direction positive for economic improvement.

As mentioned above, the definition of Portugal as a separate political entity had two main negative elements: Islam as occupier of the Iberian Peninsula and the centralization efforts of the other political entities in the same area. The first element faded as the Portuguese Reconquista, by the mid-thirteenth century, reached the southernmost point in the territory of what is today's Portugal. The conflict (either latent or open) with the remaining kingdoms of the peninsula was kept alive much beyond that. As the early centuries of the second millennium unfolded, a major centripetal force emerged in the peninsula, the kingdom of Castile. Castile progressively became the most successful centralizing political unit in the area. Such success reached a first climactic moment in the second half of the fifteenth century, during the reign of Ferdinand and Isabella, and a second one by the end of the sixteenth century, with the annexation of Portugal by the Spanish king, Phillip II. Much of the effort of Portuguese kings was to keep Portugal independent of those other kingdoms, particularly Castile. But sometimes they envisaged something different, such as an Iberian union with Portugal as its true political head. It was one of those episodes that led to a major moment both for the centralization of power in the Portuguese crown within the Portuguese territory and for the successful separation of Portugal from Castile.

Ascent of John I (1385)

It started during the reign of King Ferdinand (of Portugal), during the sixth and seventh decades of the fourteenth century. Through various maneuvers to unite Portugal to Castile (which included war and the promotion of diverse coups), Ferdinand ended up marrying his daughter to the man who would later become king of Castile. Ferdinand was, however, generally unsuccessful in his attempts to tie the crowns under his heading, and when he died in 1383 the king of Castile (thanks to his marriage with Ferdinand’s daughter) became the legitimate heir to the Portuguese crown. This was Ferdinand’s dream in reverse. The crowns would unite, but not under Portugal. The prospect of peninsular unity under Castile was not necessarily loathed by a large part of Portuguese elites, particularly parts of the aristocracy, which viewed Castile as a much more noble-friendly kingdom. This was not, however, a unanimous sentiment, and a strong reaction followed, led by other parts of the same elite, in order to keep the Portuguese crown in the hands of a Portuguese king, separate from Castile. A war with Castile and intimations of civil war ensued, and in the end Portugal’s independence was kept. The man chosen to be the successor of Ferdinand, under a new dynasty, was the bastard son of Peter I (Ferdinand’s father), the man who became John I in 1385.

This was a crucial episode, not simply because of the change in dynasty, imposed against the legitimate heir to the throne, but also because of success in the centralization of power by the Portuguese crown and, as a consequence, of separation of Portugal from Castile. Such separation led Portugal, additionally, to lose interest in further political adventures concerning Castile, and switch its attention to the Atlantic. It was the exploration of this path that led to the most unique period in Portuguese history, one during which Portugal reached heights of importance in the world that find no match in either its past or future history. This period is the Discoveries, a process that started during John I’s reign, in particular under the forceful direction of the king’s sons, most famous among them the mythical Henry, the Navigator. The 1383-85 crisis and John’s victory can thus be seen as the founding moment of the Portuguese Discoveries.

The Discoveries and the Apex of Portuguese International Power

The Discoveries are generally presented as the first great moment of world capitalism, with markets all over the world getting connected under European leadership. Albeit true, this is a largely post hoc perspective, for the Discoveries became a big commercial adventure only somewhere half-way into the story. Before they became such a thing, the aims of the Discoveries’ protagonists were mostly of another sort.

The Conquest of Ceuta

An interesting way to have a fuller picture of the Discoveries is to study the Portuguese contribution to them. Portugal was the pioneer of transoceanic navigation, discovering lands and sea routes formerly unknown to Europeans, and starting trades and commercial routes that linked Europe to other continents in a totally unprecedented fashion. But, at the start, the aims of the whole venture were entirely other. The event generally chosen to date the beginning of the Portuguese discoveries is the conquest of Ceuta – a city-state across the Straits of Gibraltar from Spain – in 1415. In itself such voyage would not differ much from other attempts made in the Mediterranean Sea from the twelfth century onwards by various European travelers. The main purpose of all these attempts was to control navigation in the Mediterranean, in what constitutes a classical fight between Christianity and Islam. Other objectives of Portuguese travelers were the will to find the mythical Prester John – a supposed Christian king surrounded by Islam: there are reasons to suppose that the legend of Prester John is associated with the real existence of the Copt Christians of Ethiopia – and to reach, directly at the source, the gold of Sudan. Despite this latter objective, religious reasons prevailed over others in spurring the first Portuguese efforts of overseas expansion. This should not surprise us, however, for Portugal had since its birth been, precisely, an expansionist political unit under a religious heading. The jump to the other side of the sea, to North Africa, was little else than the continuation of that expansionist drive. Here we must understand Portugal’s position as determined by two elements, one that was general to the whole European continent, and another one, more specific. The first is that the expansion of Portugal in the Middle-Ages coincides with the general expansion of Europe. And Portugal was very much a part of that process. The second is that, by being part of the process, Portugal was (by geographical hazard) at the forefront of the process. Portugal (and Spain) was in the first line of attack and defense against Islam. The conquest of Ceuta, by Henry, the Navigator, is hence a part of that story of confrontation with Islam.

Exploration from West Africa to India

The first efforts of Henry along the western African coast and on the Atlantic high seas can be put within this same framework. The explorations along the African coast had two main objectives: to have a keener perception of how far south Islam's strength went, and to surround Morocco, both in order to attack Islam on a wider shore and to find alternative ways to reach Prester John. These objectives depended, of course, on geographical ignorance, as the line of coast Portuguese navigators eventually found was much longer than the one Henry expected to find. In these efforts, Portuguese navigators went increasingly south, but also, mainly due to accidental changes of direction, west. Such westward deviations led to the discovery, in the first decades of the fifteenth century, of three archipelagos, the Canaries, Madeira (and Porto Santo) and the Azores. But the major navigational feat of this period was the passage of Cape Bojador in 1434, after which the whole western coast of the African continent was opened for exploration and, increasingly (and here is the novelty), commerce. As Africa revealed its riches, mostly gold and slaves, these ventures began acquiring a more strictly economic meaning. And all this kept encouraging the Portuguese to go further south and, when they reached the southernmost tip of the African continent, to pass it and go east. And so they did. Bartolomeu Dias rounded the Cape of Good Hope in 1487 and ten years later Vasco da Gama sailed around Africa to reach India by sea. By the time of Vasco da Gama's journey, the autonomous economic importance of intercontinental trade was well established.

Feitorias and Trade with West Africa, the Atlantic Islands and India

As the second half of the fifteenth century unfolded, Portugal created a complex trade structure connecting India and the African coast to Portugal and, then, to the north of Europe. This consisted of a net of trading posts (feitorias) along the African coast, where goods were shipped to Portugal, and then re-exported to Flanders, where a further Portuguese feitoria was opened. This trade was based on such African goods as gold, ivory, red peppers, slaves and other less important goods. As was noted by various authors, this was somehow a continuation of the pattern of trade created during the Middle Ages, meaning that Portugal was able to diversify it, by adding new goods to its traditional exports (wine, olive oil, fruits and salt). The Portuguese established a virtual monopoly of these African commercial routes until the early sixteenth century. The only threats to that trade structure came from pirates originating in Britain, Holland, France and Spain. One further element of this trade structure was the Atlantic Islands (Madeira, the Azores and the African archipelagos of Cape Verde and São Tomé). These islands contributed with such goods as wine, wheat and sugar cane. After the sea route to India was discovered and the Portuguese were able to establish regular connections with India, the trading structure of the Portuguese empire became more complex. Now the Portuguese began bringing multiple spices, precious stones, silk and woods from India, again based on a net of feitorias there established. The maritime route to India acquired an extreme importance to Europe, precisely at this time, since the Ottoman Empire was then able to block the traditional inland-Mediterranean route that supplied the continent with Indian goods.

Control of Trade by the Crown

One crucial aspect of the Portuguese Discoveries is the high degree of control exerted by the crown over the whole venture. The first episodes in the early fifteenth century, under Henry the Navigator (as well as the first exploratory trips along the African coast), were entirely directed by the crown. Then, as the activity became more profitable, it was first liberalized, and then rented (in toto) to merchants, who were required to pay the crown a significant share of their profits. Finally, when the full Indo-African network was consolidated, the crown directly controlled the largest share of the trade (although never monopolizing it), participated in "public-private" joint ventures, or imposed heavy tributes on traders. The grip of the crown increased with the growth of the size and complexity of the empire. Until the early sixteenth century, the empire consisted mainly of a network of trading posts. No serious attempt was made by the Portuguese crown to exert a significant degree of territorial control over the various areas constituting the empire.

The Rise of a Territorial Empire

This changed with the growth of trade from India and Brazil. As India was transformed into a platform for trade not only around Africa but also in Asia, a tendency developed (in particular under Afonso de Albuquerque, in the early sixteenth century) to create an administrative structure in the territory. This was not particularly successful. An administrative structure was indeed created, but remained forever incipient. A relatively more complex administrative structure would appear only in Brazil. Until the middle of the sixteenth century, Brazil was relatively ignored by the crown. But with the success of the system of sugar cane plantations in the Atlantic islands, the Portuguese crown decided to transplant it to Brazil. Although political power was initially controlled by a group of seigneurs to whom the crown donated certain areas of the territory, the system became increasingly centralized as time went on. This is clearly visible in the creation of the post of governor-general of Brazil, directly responsible to the crown, in 1549.

Portugal Loses Its Expansionary Edge

Until the early sixteenth century, Portugal capitalized on being the pioneer of European expansion. It monopolized African and, initially, Indian trade. By that time, however, changes were taking place, and two significant events mark the turn of the political tide. The first was the increasing assertiveness of the Ottoman Empire in the Eastern Mediterranean, which coincided with a new bout of Islamic expansionism – ultimately bringing the Mughal dynasty to India – as well as with the re-opening of the Mediterranean route for Indian goods. This put pressure on Portuguese control over Indian trade: not only was political control over the subcontinent now directly threatened by Islamic rulers, but the profits from Indian trade also began to decline. This is certainly one of the reasons why Portugal redirected its imperial interests to the south Atlantic, particularly Brazil – the other reasons being the growing demand for sugar in Europe and the success of the sugar cane plantation system in the Atlantic islands. The second event marking the change of tide was the increased assertiveness of imperial Spain, both within Europe and overseas. Spain, under the Habsburgs (mostly Charles V and Philip II), exerted a dominance over the European continent unprecedented since Roman times. This was complemented by the beginning of the exploration of the American continent (from the Caribbean to Mexico and the Andes), again putting pressure on the Portuguese empire overseas. What is more, this is the period when not only Spain but also Britain, Holland and France acquired navigational and commercial skills equivalent to those of the Portuguese, and thus began competing with them on some of their more traditional routes and trades. By the middle of the sixteenth century, Portugal had definitively lost its expansionary edge. This would come to a tragic conclusion with the death of the heirless King Sebastian in North Africa in 1578 and the loss of political independence to Spain, under Philip II, in 1580.

Empire and the Role, Power and Finances of the Crown

The first century of empire brought significant political consequences for the country. As noted above, the Discoveries were directed by the crown to a very large extent. As such, they constituted one further step in the affirmation of Portugal as a separate political entity within the Iberian Peninsula. Empire created a political and economic sphere in which Portugal could remain independent from the rest of the peninsula, and it thus contributed to the definition of what we might call “national identity.” Additionally, empire significantly enhanced the crown’s redistributive power. To share in the profits of transoceanic trade, or to reach a position in the imperial hierarchy or even within the national hierarchy proper, candidates had to turn to the crown. As it controlled imperial activities, the crown became a huge employment agency, capable of attracting the efforts of most of the national elite. The empire was thus transformed into an extremely important instrument for the centralization of the crown’s power. It has already been mentioned that much of the political history of Portugal from the Middle Ages to the nineteenth century revolves around the tension between the centripetal power of the crown and the centrifugal powers of the aristocracy, the Church and the local communities. The imperial episode constituted a major step in the centralization of the crown’s power. The way such centralization occurred was, however, peculiar, and this would have crucial consequences for the future. Various authors have noted how, despite the growing centralizing power of the crown, the aristocracy was able to keep its local powers, thanks to the significant taxing and judicial autonomy it possessed in the lands under its control. This is largely true, but, as other authors have noted, it happened with the crown acting as an intermediary agent. The Portuguese aristocracy had from early times been much less independent of the crown than its counterparts in most of Western Europe, and this situation was accentuated during the days of empire. As we have seen above, the crown directed the Reconquista in a way that enabled it to control and redistribute (through the famous donations) most of the land that was conquered. In those early medieval days it was, thus, service to the crown that made noblemen eligible to benefit from land donations. It is undoubtedly true that by donating land the crown was also giving away (at least partially) its monopoly of taxing and judging. But what is crucial here is its significant intermediary power. With empire, that power increased again, and once more a large part of the aristocracy became dependent on the crown to acquire political and economic power. The empire became, furthermore, the main means of financing the crown. Receipts from trade activities related to the empire (profits, tariffs or other taxes) never fell below 40 percent of the crown’s total receipts until the nineteenth century, and dropped that low only briefly, in the worst years. Most of the time, those receipts amounted to 60 or 70 percent of the crown’s total receipts.

Other Economic Consequences of the Empire

This role in the crown’s receipts was one of the most important consequences of empire. Thanks to it, tax receipts from internal economic activity became largely unnecessary for the functioning of national government, something that would have deep consequences precisely for that internal activity. This was not, however, the only economic consequence of empire. One of the most important was, obviously, the enlargement of the country’s trade base. Thanks to empire, the Portuguese (and Europe, through the Portuguese) gained access to vast sources of precious metals, stones, tropical goods (such as fruit, sugar, tobacco, rice, potatoes and maize), raw materials and slaves. Portugal used these goods to enlarge its pattern of comparative advantage, which helped it penetrate European markets, while at the same time enlarging the volume and variety of its imports from Europe. Such a process of specialization along comparative-advantage lines was, however, very incomplete. As noted above, the crown exerted a high degree of control over the trade activity of empire, and as a consequence many institutional factors interfered to prevent Portugal (and its imperial complex) from fully following those principles. In the end, in economic terms, the empire was inefficient – something to be contrasted, for instance, with its Dutch equivalent, which was much more geared to commercial success and based on clearer efficiency-oriented management methods. By controlling imperial trade so closely, the crown became a sort of barrier between the empire’s riches and the national economy. Much of what was earned in imperial activity was spent either on maintaining the empire or on the crown’s clientele; consequently, the spreading of the gains from imperial trade to the rest of the economy was highly centralized in the crown. A highly visible effect of this phenomenon was the remarkable growth and size of the country’s capital, Lisbon. In the sixteenth century Lisbon was the fifth largest city in Europe, and from the sixteenth to the nineteenth century it was always in the top ten, a remarkable feat for a country with as small a population as Portugal’s. It was also the symptom of a much inflated bureaucracy living on the gains of empire, as well as of the limited diffusion of those gains through the rest of the economy.

Portuguese Industry and Agriculture

The rest of the economy did, indeed, remain largely untouched by this imperial manna. Most of industry was unaffected by it; the only visible impact of empire on the sector was the stimulus it gave to naval construction and repair and their ancillary activities. Most of industry kept functioning according to old standards, far from the impact of transoceanic prosperity. Much the same happened with agriculture. Although it benefited from the introduction of new crops (mostly maize, but also potatoes and rice), Portuguese agriculture did not benefit significantly from the income stream arising from imperial trade, in particular as a source of investment, where one might most have expected it. Maize constituted an important technological innovation with a major impact on the productivity of Portuguese agriculture, but it was confined to the northwestern part of the country, leaving the rest of the sector untouched.

Failure of a Modern Land Market to Develop

One very important consequence of empire for agriculture and, hence, for the economy was the preservation of the property structure inherited from the Middle Ages, namely the one resulting from the crown’s donations. The empire once more enhanced the crown’s power to attract talent and, consequently, to donate land. Donations were regulated by official documents called Cartas de Foral, in which the tributes due to the beneficiaries were specified. During the time of the empire, the conditions governing donations changed in a way that reveals increased monarchical power: donations were made for long periods (for instance, one life), but the land could be neither sold nor divided (and thus no part of it could be sold separately), and renewal required confirmation by the crown. By prohibiting the buying, selling and partition of land, the rules of donation were thus a major obstacle not only to the existence of a land market, but also to a clear definition of property rights and to freedom in the management of land use.

Additionally, various tributes were due to the beneficiaries. Some were in kind, some in money; some were fixed, others proportional to the product of the land. This dissociated land ownership from appropriation of the land’s product, since the land ultimately belonged to the crown. Furthermore, the actual beneficiaries (thanks to the rules of donation) had little freedom in the management of the donated land. Although beneficiaries were forbidden to sell land held in such circumstances, they were not forbidden to rent it, and several did so, introducing a further dissociation between ownership and appropriation of product. Although some tributes in these donations were paid by freeholders, most were paid by copyholders. Copyhold granted its signatories the use of land in perpetuity or for a number of lives (one to three), but did not allow them to sell it. This introduced yet another dissociation between ownership, appropriation of the land’s product and its management. Although it could not be sold, land under copyhold could be ceded in “sub-copyhold” contracts – replications of the original contract under identical conditions – which added a further complication to the system. As should be clear by now, such a “baroque” system created an accumulation of layers of rights over the land: different people could exert different rights over it, and each layer of rights was limited by the others, sometimes conflicting with them in intricate ways. A major consequence of all this was the limited freedom the various holders of rights had in the management of their assets.

High Levels of Taxation in Agriculture

A second direct consequence of the system was the complicated juxtaposition of tributes on agricultural product. The land and its product in Portugal in those days were loaded with tributes (a sort of taxation). This explains one recent historian’s claim (admittedly exaggerated) that, in that period, those who owned the land did not work it, and those who worked it did not own it. We must distinguish these tributes from strict rent payments, since rent contracts are freely signed by the two (or more) parties to them. The tributes discussed here represented, in reality, an imposition, which makes the word taxation appropriate to describe them. This is one further result of the already mentioned feature of the institutional framework of the time: the difficulty of distinguishing between the private and the public spheres.

Besides the tributes just described, other tributes also weighed on the land. Some were, again, of a nature we would nowadays call private, others of a more clearly public nature. The former were the tributes due to the Church; the latter were taxes proper, due explicitly as such to the crown. The main tribute due to the Church was the tithe. In theory, the tithe was a tenth of farmers’ production and should have been paid directly to certain religious institutions. In practice, it was not always a tenth of production, nor did the Church always receive it directly, as its collection was in a large number of cases rented out to various other agents. Nevertheless, it was an important tribute to be paid by producers in general. The taxes due to the crown were the sisa (an indirect tax on consumption) and the décima (an income tax). As far as we know, these taxes weighed on average much less than the seigneurial tributes. Still, when added to them, they accentuated the high level of taxation or para-taxation typical of the Portuguese economy of the time.

Portugal under Spanish Rule, Restoration of Independence and the Eighteenth Century

Spanish Rule of Portugal, 1580-1640

The death of King Sebastian in North Africa, during a military expedition in 1578, left the Portuguese throne with no direct heir. There were, however, various indirect candidates in line, thanks to the many kinship links between the Portuguese royal family and other European royal and aristocratic families. Among them was Philip II of Spain, who would eventually inherit the Portuguese throne, although only after invading the country in 1580. Between 1578 and 1580 leaders in Portugal tried unsuccessfully to find a “national” solution to the succession problem. In the end, resistance to the establishment of Spanish rule was extremely light.

Initial Lack of Resistance to Spanish Rule

To understand why resistance was so mild, one must bear in mind the nature of political units such as the Portuguese and Spanish kingdoms at the time. These kingdoms were not the equivalent of contemporary nation-states. They had a separate identity, evident in such things as a different language, a different cultural history, and different institutions, but this did not amount to nationhood. The crown itself, seen as an institution, still retained many features of a “private” venture. Of course, to some extent it represented the materialization of the kingdom and its “people,” but (by the standards of current political concepts) its definition remained far more ambiguous. Furthermore, Philip II promised to adopt a set of rules allowing for extensive autonomy: the Portuguese crown would be “aggregated” to the Spanish crown, not “absorbed,” “associated” or even “integrated” with it. According to those rules, Portugal was to keep its separate identity as a crown and as a kingdom: all positions in the Portuguese government were to be filled by Portuguese, the Portuguese language was the only one allowed in official matters in Portugal, and positions in the Portuguese empire were to be granted only to Portuguese.

The implementation of such rules depended largely on the willingness of the Portuguese nobility, Church and high-ranking officials to accept them. As there were no major popular revolts that could pressure these groups to decide otherwise, they had little difficulty in doing so. In reality, they saw the new situation as an opportunity for greater power. After all, Spain was then the largest and most powerful political unit in Europe, with vast possessions throughout the world. To participate in such a venture under conditions of great autonomy was seen as an excellent opportunity.

Resistance to Spanish Rule under Philip IV

The autonomous status was kept largely untouched until the third decade of the seventeenth century, that is, until the reign of Philip IV (1621-1640 in Portugal). This was a reign marked by an important attempt at centralization of power under the Spanish crown. A major impulse for this was Spain’s participation in the Thirty Years War. Simply put, the financial stress caused by the war forced the crown not only to increase fiscal pressure on the various political units under its rule but also to try to control them more closely. This led to serious efforts to revoke the autonomous status of Portugal (as well as of other European regions of the empire). It was as a reaction to those attempts that many Portuguese aristocrats and important personalities led a movement to recover independence. This movement must, again, be interpreted with care, paying attention to the political concepts of the time. It was not an overtly national reaction, in today’s sense of the word “national.” It was mostly a reaction of certain social groups that felt their power threatened by the new plans for increased centralization under Spain. As some historians have noted, the 1640 revolt is best understood as a movement to preserve the constitutional framework of autonomy established in 1580 against the new centralizing drive, rather than as a national or nationalist movement.

Although that was the original intent of the movement, the fact is that the new Portuguese dynasty (whose first monarch was John IV, 1640-1656) progressively carried out an unprecedented centralization of power in the hands of the Portuguese crown. This means that, even if the original intent of the leaders of the 1640 revolt was to keep the autonomy that had prevailed both under pre-1580 Portuguese rule and under post-1580 Spanish rule, the final result of their action was to favor centralization in the Portuguese crown, and thus to help define Portugal as a clearly separate country. Again, we should be careful not to interpret this new bout of centralization in the seventeenth and eighteenth centuries as the creation of a national state and a modern government. Many of the intermediate groups (in particular the Church and the aristocracy) kept their powers largely intact, even powers we would nowadays call public (such as taxation, justice and police). But there is no doubt that the crown significantly increased its redistributive power, and the nobility and the Church increasingly had to rely on service to the crown to keep most of their powers.

Consequences of Spanish Rule for the Portuguese Empire

The period of Spanish rule had significant consequences for the Portuguese empire. Owing to its integration in the Spanish empire, Portuguese colonial territories became a legitimate target for all of Spain’s enemies. The European countries with imperial strategies (in particular Britain, the Netherlands and France) no longer saw Portugal as a countervailing ally in their struggle with Spain, and consequently mounted serious assaults on Portuguese overseas possessions. One further element of the geopolitical landscape of the period increased competitors’ willingness to attack Portugal: Holland’s process of separation from the Spanish empire. Spain was not only a large overseas empire but also an enormous European one, of which Holland was a part until the 1560s. Holland saw the Portuguese section of the Iberian empire as its weakest link and, accordingly, attacked it in a fairly systematic way. The Dutch attack on Portuguese colonial possessions ranged from America (Brazil) to Africa (São Tomé and Angola) to Asia (India, several points in Southeast Asia, and Indonesia), and in the course of it several Portuguese territories were conquered, mostly in Asia. Portugal, however, managed to keep most of its African and American territories.

The Shift of the Portuguese Empire toward the Atlantic

When it regained independence, Portugal had to realign its external position in accordance with the new context. Interestingly enough, all the rivals that had attacked the country’s possessions during Spanish rule initially supported its separation. France was the most decisive partner in the first efforts to regain independence; later (in the 1660s, in the final years of the war with Spain), Britain assumed that role. This inaugurated an essential feature of Portuguese external relations: from then on, Britain became Portugal’s most consistent foreign partner. In the 1660s this shift was connected to the reorientation of the Portuguese empire. What had until then been the center of the empire (its eastern part: India and the rest of Asia) lost importance, at first because of the renewal of activity on the Mediterranean route, which threatened the sea route to India, and then because the eastern empire was the part where the Portuguese had lost the most territory during Spanish rule, in particular to the Netherlands. Portugal kept most of its positions in both Africa and America, and this part of the world was to acquire extreme importance in the seventeenth and eighteenth centuries. In the last decades of the seventeenth century, Portugal was able to develop numerous trades centered mostly on Brazil (although some of the Atlantic islands also participated), involving sugar, tobacco and tropical woods, all sent to the growing market for luxury goods in Europe, to which was added a growing and prosperous trade in slaves from West Africa to Brazil.

Debates over the Role of Brazilian Gold and the Methuen Treaty

The range of goods in Atlantic trade acquired an important addition with the discovery of gold in Brazil in the late seventeenth century. It is the increased importance of gold in Portuguese trade relations that helps explain one of the most important diplomatic moments in Portuguese history, the Methuen Treaty (also called the Queen Anne Treaty), signed between Britain and Portugal in 1703. Many Portuguese economists and historians have blamed the treaty for Portugal’s inability to achieve modern economic growth during the eighteenth and nineteenth centuries. It should be remembered that the treaty stipulated that tariffs on imports of Portuguese wine into Britain would be reduced (explicitly favoring it over French wine), while, as a counterpart, Portugal had to eliminate all prohibitions on imports of British wool textiles (even if tariffs were left in place). Some historians and economists have seen this as Portugal renouncing a national industrial sector in order to specialize in agricultural goods for export. As proof, such scholars present figures for the balance of trade between Portugal and Britain after 1703, with the former exporting mainly wine and the latter textiles, and a widening trade deficit. Other authors, however, have shown that what mostly financed this trade (and the deficit) was not wine but the newly discovered Brazilian gold. Could gold, then, be the culprit for preventing Portuguese economic growth? Most historians now reject the hypothesis. On this view, the problem lay not in a particular treaty signed in the early eighteenth century but in the structural conditions under which the economy had to grow – a question to be dealt with further below.

Portuguese historiography currently tends to see the Methuen Treaty mostly in the light of Portuguese diplomatic relations in the seventeenth and eighteenth centuries. Seen in this light, the treaty chiefly marks the definitive alignment of Portugal within the British sphere. The treaty was signed during the War of the Spanish Succession, a war that divided Europe in a most dramatic manner. As the Spanish crown was left without a successor in 1700, the countries of Europe were led to support different candidates, and the diplomatic choice ended up being polarized between Britain, on one side, and France, on the other. Increasingly, Portugal was led to prefer Britain, the country that offered more protection to the prosperous Portuguese Atlantic trade. As Britain also had an interest in this alignment (because of the important Portuguese colonial possessions), the treaty was in fact economically beneficial to Portugal (contrary to what some of the older historiography tended to believe). In simple trade terms, the treaty was a good bargain for both countries, each having been given preferential treatment for certain of its more typical goods.

Brazilian Gold’s Impact on Industrialization

It is this sequence of events that has led several economists and historians to blame gold for the Portuguese failure to industrialize in the eighteenth and nineteenth centuries. Recent historiography, however, has questioned this interpretation. The manufactures of the time were dedicated to the production of luxury goods and, consequently, directed to a small market that had little to do (in terms of either markets or technology) with the sectors typical of European industrialization. Even had it continued, it is very doubtful that this activity would ever have become a full industrial spurt of the kind then under way in Britain. The problem lay elsewhere, as we will see below.

Prosperity in the Early 1700s Gives Way to Decline

Be that as it may, the first half of the eighteenth century was a period of unquestionable prosperity for Portugal, thanks mostly to gold but also to the recovery of the remaining trades (both tropical and from the mainland). Such prosperity is most visible in the reign of King John V (1706-1750), generally seen as the Portuguese equivalent of the reign of France’s Louis XIV. Palaces and monasteries of great dimensions were built, and the king’s court acquired a pomp and grandeur not seen before or since, all financed largely by Brazilian gold. By the mid-eighteenth century, however, it all began to falter. Gold remittances began to decline in the 1750s. A new crisis set in, compounded by the dramatic 1755 earthquake, which destroyed a large part of Lisbon and other cities. This crisis was at the root of a political project aiming at a vast renewal of the country. It was the first in a series of such projects, all of them, significantly, following traumatic events related to empire. The new project is associated with the reign of King Joseph I (1750-1777), and in particular with the policies of his prime minister, the Marquis of Pombal.

Centralization under the Marquis of Pombal

The thread linking the most important political measures taken by the Marquis of Pombal is the reinforcement of state power. A major element in this connection was his confrontation with certain noble and Church representatives, the most spectacular episodes of which were the execution of an entire noble family and the expulsion of the Jesuits from national soil. This is sometimes taken to represent an outright hostile policy towards both aristocracy and Church, but it is better seen as an attempt to integrate aristocracy and Church into the state, thus undermining their autonomous powers. In reality, what the Marquis did was to use the power to confer noble titles, as well as the Inquisition, as means to centralize and increase state power. Indeed, one of the most important instruments of recruitment for state functions during the Marquis’ rule was the promise of noble titles. The Inquisition’s functions also changed: from being mainly a religious court, mostly dedicated to the prosecution of Jews, it became a sort of civil political police. The Marquis’ centralizing policy covered a wide range of matters, in particular those most significant to state power. Internal policing was reinforced, with the creation of new police institutions directly coordinated by the central government. The collection of taxes became more efficient, through an institution closer to a modern Treasury than any of its predecessors. Improved collection also applied to tariffs and to profits from colonial trade.

The centralization of power in the government had significant repercussions for certain aspects of the relationship between state and civil society. Although the Marquis’ rule is frequently pictured as violent, it included measures generally considered “enlightened.” Such is the case of the abolition of the distinction between “New Christians” and Christians (New Christians were Jews converted to Catholicism who, as such, suffered a certain degree of segregation, constituting an intermediate category between Jews and Christians proper). Another very important measure taken by the Marquis was the abolition of slavery in mainland Portugal (even if slavery continued to be used in the colonies and the slave trade continued to prosper, there is no questioning the importance of the measure).

Economic Centralization under the Marquis of Pombal

The Marquis applied his centralizing drive to economic matters as well. This happened first in agriculture, with the creation of a monopoly company for the Port wine trade. It continued in colonial trade, where the method applied was the same: the creation of companies monopolizing trade in certain products or regions of the empire. Later, interventionism extended to manufacturing. Such interventionism was essentially prompted by the international trade crisis that affected many colonial goods, the most important among them gold. As the country faced a new international payments crisis, the Marquis resorted to protectionism and to the subsidization of various industrial sectors. Again, as such state support was essentially devoted to traditional, low-technology industries, this policy failed to propel Portugal into the group of countries that industrialized first.

Failure to Industrialize

The country would never be the same after the Marquis’ rule. The “modernization” of state power and his various policies left a profound mark on the Portuguese polity. They were not enough, however, to create the conditions necessary for Portugal to enter a process of industrialization. In reality, most of the structural impediments to modern growth were left untouched, or even aggravated, by the Marquis’ policies. This is particularly true of the relationship between central power and peripheral (aristocratic) powers. The Marquis continued the tradition, exacerbated during the fifteenth and sixteenth centuries, of liberally conferring noble titles on members of the court. Again, this accentuated the confusion between the public and the private spheres, with particular consequences (for what concerns us here) for the definition of property and property rights. The granting of a noble title by the crown often implied a donation of land. The beneficiary of the donation was entitled to collect tributes from the population living in the territory but was forbidden to sell it and, sometimes, even to rent it. This meant such beneficiaries were not true owners of the land; it could not exactly be called their property. This lack of private rights was, however, compensated for by the granting of such “public” rights as the ability to collect tributes – a sort of tax. Beneficiaries of donations were thus neither true landowners nor true state representatives. The same went for the crown. By giving away many of the powers we tend to call public today, the crown was acting as if it could dispose of land under its administration in the same manner as private property. But since this was not entirely private property, the crown was in doing so also conceding public powers to agents we would today call private. Such confusion helped create neither a true entrepreneurial class nor a state dedicated to the protection of private property rights.

The whole property structure described above was preserved, even after the reforming efforts of the Marquis of Pombal. The system of donations as a method of payment for positions at the king’s court, together with the juxtaposition of various sorts of tributes due either to the crown or to local powers, perpetuated a situation in which the private and the public spheres were not clearly separated. Consequently, property rights were not well defined. If there is a crucial reason for Portugal’s impaired economic development, it is to these features that we should pay attention. Next, we turn to the nineteenth and twentieth centuries, to see how difficult the dismantling of this institutional structure proved to be and how it affected the growth potential of the Portuguese economy.

Suggested Reading:

Birmingham, David. A Concise History of Portugal. Cambridge: Cambridge University Press, 1993.

Boxer, C.R. The Portuguese Seaborne Empire, 1415-1825. New York: Alfred A. Knopf, 1969.

Godinho, Vitorino Magalhães. “Portugal and Her Empire, 1680-1720.” The New Cambridge Modern History, Vol. VI. Cambridge: Cambridge University Press, 1970.

Oliveira Marques, A.H. History of Portugal. New York: Columbia University Press, 1972.

Wheeler, Douglas. Historical Dictionary of Portugal. London: Scarecrow Press, 1993.

Citation: Amaral, Luciano. “Economic History of Portugal”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-portugal/

English Poor Laws

George Boyer, Cornell University

A compulsory system of poor relief was instituted in England during the reign of Elizabeth I. Although the role played by poor relief was significantly modified by the Poor Law Amendment Act of 1834, the Crusade Against Outrelief of the 1870s, and the adoption of various social insurance programs in the early twentieth century, the Poor Law continued to assist the poor until it was replaced by the welfare state in 1948. For nearly three centuries, the Poor Law constituted “a welfare state in miniature,” relieving the elderly, widows, children, the sick, the disabled, and the unemployed and underemployed (Blaug 1964). This essay will outline the changing role played by the Poor Law, focusing on the eighteenth and nineteenth centuries.

The Origins of the Poor Law

While legislation dealing with vagrants and beggars dates back to the fourteenth century, perhaps the first English poor law legislation was enacted in 1536, instructing each parish to undertake voluntary weekly collections to assist the “impotent” poor. The parish had been the basic unit of local government since at least the fourteenth century, although Parliament imposed few if any civic functions on parishes before the sixteenth century. Parliament adopted several other statutes relating to the poor in the next sixty years, culminating with the Acts of 1597-98 and 1601 (43 Eliz. I c. 2), which established a compulsory system of poor relief that was administered and financed at the parish (local) level. These Acts laid the groundwork for the system of poor relief up to the adoption of the Poor Law Amendment Act in 1834. Relief was to be administered by a group of overseers, who were to assess a compulsory property tax, known as the poor rate, to assist those within the parish “having no means to maintain them.” The poor were divided into three groups: able-bodied adults, children, and the old or non-able-bodied (impotent). The overseers were instructed to put the able-bodied to work, to give apprenticeships to poor children, and to provide “competent sums of money” to relieve the impotent.

Deteriorating economic conditions and loss of traditional forms of charity in the 1500s

The Elizabethan Poor Law was adopted largely in response to a serious deterioration in economic circumstances, combined with a decline in more traditional forms of charitable assistance. Sixteenth century England experienced rapid inflation, caused by rapid population growth, the debasement of the coinage in 1526 and 1544-46, and the inflow of American silver. Grain prices more than tripled from 1490-1509 to 1550-69, and then increased by an additional 73 percent from 1550-69 to 1590-1609. The prices of other commodities increased nearly as rapidly — the Phelps Brown and Hopkins price index rose by 391 percent from 1495-1504 to 1595-1604. Nominal wages increased at a much slower rate than did prices; as a result, real wages of agricultural and building laborers and of skilled craftsmen declined by about 60 percent over the course of the sixteenth century. This decline in purchasing power led to severe hardship for a large share of the population. Conditions were especially bad in 1595-98, when four consecutive poor harvests led to famine conditions. At the same time that the number of workers living in poverty increased, the supply of charitable assistance declined. The dissolution of the monasteries in 1536-40, followed by the dissolution of religious guilds, fraternities, almshouses, and hospitals in 1545-49, “destroyed much of the institutional fabric which had provided charity for the poor in the past” (Slack 1990). Given the circumstances, the Acts of 1597-98 and 1601 can be seen as an attempt by Parliament both to prevent starvation and to control public order.
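The real-wage arithmetic behind these figures can be made explicit. The following minimal sketch (in Python, using only the rounded price-index and real-wage figures quoted above; the values are illustrative, not drawn from the underlying sources) shows what a 391 percent price rise combined with a 60 percent fall in real wages implies for nominal wages:

price_index = 1.0 + 3.91        # prices at the end of the century relative to the start (+391%)
real_wage_index = 1.0 - 0.60    # real wages at the end relative to the start (-60%)

# The real wage is the nominal wage deflated by prices, so
# nominal wage index = real wage index * price index.
implied_nominal_wage_index = real_wage_index * price_index
print(round(implied_nominal_wage_index, 2))   # about 1.96

In other words, nominal wages roughly doubled over the century while prices nearly quintupled, which is what produced the severe fall in purchasing power described above.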

The Poor Law, 1601-1750

It is difficult to determine how quickly parishes implemented the Poor Law. Paul Slack (1990) contends that in 1660 a third or more of parishes regularly were collecting poor rates, and that by 1700 poor rates were universal. The Board of Trade estimated that in 1696 expenditures on poor relief totaled £400,000 (see Table 1), slightly less than 1 percent of national income. No official statistics exist for this period concerning the number of persons relieved or the demographic characteristics of those relieved, but it is possible to get some idea of the makeup of the “pauper host” from local studies undertaken by historians. These suggest that, during the seventeenth century, the bulk of relief recipients were elderly, orphans, or widows with young children. In the first half of the century, orphans and lone-parent children made up a particularly large share of the relief rolls, while by the late seventeenth century in many parishes a majority of those collecting regular weekly “pensions” were aged sixty or older. Female pensioners outnumbered males by as much as three to one (Smith 1996). On average, the payment of weekly pensions made up about two-thirds of relief spending in the late seventeenth and early eighteenth centuries; the remainder went to casual benefits, often to able-bodied males in need of short-term relief because of sickness or unemployment.

Settlement Act of 1662

One of the issues that arose in the administration of relief was that of entitlement: did everyone within a parish have a legal right to relief? Parliament addressed this question in the Settlement Act of 1662, which formalized the notion that each person had a parish of settlement, and which gave parishes the right to remove within forty days of arrival any newcomer deemed “likely to be chargeable” as well as any non-settled applicant for relief. While Adam Smith, and some historians, argued that the Settlement Law put a serious brake on labor mobility, available evidence suggests that parishes used it selectively, to keep out economically undesirable migrants such as single women, older workers, and men with large families.

Relief expenditures increased sharply in the first half of the eighteenth century, as can be seen in Table 1. Nominal expenditures increased by 72 percent from 1696 to 1748-50 despite the fact that prices were falling and population was growing slowly; real expenditures per capita increased by 84 percent. A large part of this rise was due to increasing pension benefits, especially for the elderly. Some areas also experienced an increase in the number of able-bodied relief recipients. In an attempt to deter some of the poor from applying for relief, Parliament in 1723 adopted the Workhouse Test Act, which empowered parishes to deny relief to any applicant who refused to enter a workhouse. While many parishes established workhouses as a result of the Act, these were often short-lived, and the vast majority of paupers continued to receive outdoor relief (that is, relief in their own homes).

The Poor Law, 1750-1834

The period from 1750 to 1820 witnessed an explosion in relief expenditures. Real per capita expenditures more than doubled from 1748-50 to 1803, and remained at a high level until the Poor Law was amended in 1834 (see Table 1). Relief expenditures increased from 1.0% of GDP in 1748-50 to a peak of 2.7% of GDP in 1818-20 (Lindert 1998). The demographic characteristics of the pauper host changed considerably in the late eighteenth and early nineteenth centuries, especially in the rural south and east of England. There was a sharp increase in numbers receiving casual benefits, as opposed to regular weekly pensions. The age distribution of those on relief became younger — the share of paupers who were prime-aged (20-59) increased significantly, and the share aged 60 and over declined. Finally, the share of relief recipients in the south and east who were male increased from about a third in 1760 to nearly two-thirds in 1820. In the north and west there also were shifts toward prime-age males and casual relief, but the magnitude of these changes was far smaller than elsewhere (King 2000).

Gilbert’s Act and the Removal Act

There were two major pieces of legislation during this period. Gilbert’s Act (1782) empowered parishes to join together to form unions for the purpose of relieving their poor. The Act stated that only the impotent poor should be relieved in workhouses; the able-bodied should either be found work or granted outdoor relief. To a large extent, Gilbert’s Act simply legitimized the policies of a large number of parishes that found outdoor relief both less expensive and more humane than workhouse relief. The other major piece of legislation was the Removal Act of 1795, which amended the Settlement Law so that no non-settled person could be removed from a parish unless he or she applied for relief.

Speenhamland System and other forms of poor relief

During this period, relief for the able-bodied took various forms, the most important of which were: allowances-in-aid-of-wages (the so-called Speenhamland system), child allowances for laborers with large families, and payments to seasonally unemployed agricultural laborers. The system of allowances-in-aid-of-wages was adopted by magistrates and parish overseers throughout large parts of southern England to assist the poor during crisis periods. The most famous allowance scale, though by no means the first, was that adopted by Berkshire magistrates at Speenhamland on May 6, 1795. Under the allowance system, a household head (whether employed or unemployed) was guaranteed a minimum weekly income, the level of which was determined by the price of bread and by the size of his or her family. Such scales typically were instituted only during years of high food prices, such as 1795-96 and 1800-01, and removed when prices declined. Child allowance payments were widespread in the rural south and east, which suggests that laborers’ wages were too low to support large families. The typical parish paid a small weekly sum to laborers with four or more children under age 10 or 12. Seasonal unemployment had been a problem for agricultural laborers long before 1750, but the extent of seasonality increased in the second half of the eighteenth century as farmers in southern and eastern England responded to the sharp increase in grain prices by increasing their specialization in grain production. The increase in seasonal unemployment, combined with the decline in other sources of income, forced many agricultural laborers to apply for poor relief during the winter.
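The mechanics of an allowance scale can be sketched in a few lines. The parameters below are purely hypothetical (the actual Speenhamland scale was expressed in terms of the price of the gallon loaf and the number of family members); the sketch only illustrates how a parish topped earnings up to a bread-price-indexed guarantee:

def weekly_relief(bread_price, dependents, earnings,
                  head_loaves=3.0, loaves_per_dependent=1.5):
    """Relief needed to top household income up to the guaranteed minimum.

    The guarantee rises with the price of bread and with family size.
    The parameter values are hypothetical, for illustration only.
    """
    guaranteed_income = bread_price * (head_loaves + loaves_per_dependent * dependents)
    return max(0.0, guaranteed_income - earnings)

# Hypothetical example: a laborer with four dependents earning 8 shillings a week,
# with bread at 1.5 shillings per loaf, would receive 5.5 shillings from the parish.
print(weekly_relief(bread_price=1.5, dependents=4, earnings=8.0))

Because the guarantee applied whether the household head was employed or unemployed, such a payment acted as a wage supplement in years of high bread prices and disappeared when prices fell.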

Regional differences in relief expenditures and recipients

Table 2 reports data for fifteen counties located throughout England on per capita relief expenditures for the years ending in March 1783-85, 1803, 1812, and 1831, and on relief recipients in 1802-03. Per capita expenditures were higher on average in agricultural counties than in more industrial counties, and were especially high in the grain-producing southern counties — Oxford, Berkshire, Essex, Suffolk, and Sussex. The share of the population receiving poor relief in 1802-03 varied significantly across counties, being 15 to 23 percent in the grain-producing south and less than 10 percent in the north. The demographic characteristics of those relieved also differed across regions. In particular, the share of relief recipients who were elderly or disabled was higher in the north and west than it was in the south; by implication, the share that was able-bodied was higher in the south and east than elsewhere. Economic historians typically have concluded that these regional differences in relief expenditures and numbers on relief were caused by differences in economic circumstances; that is, poverty was more of a problem in the agricultural south and east than it was in the pastoral southwest or in the more industrial north (Blaug 1963; Boyer 1990). More recently, King (2000) has argued that the regional differences in poor relief were determined not by economic structure but rather by “very different welfare cultures on the part of both the poor and the poor law administrators.”

Causes of the Increase in Relief to Able-bodied Males

What caused the increase in the number of able-bodied males on relief? In the second half of the eighteenth century, a large share of rural households in southern England suffered significant declines in real income. County-level cross-sectional data suggest that, on average, real wages for day laborers in agriculture declined by 19 percent from 1767-70 to 1795 in fifteen southern grain-producing counties, then remained roughly constant from 1795 to 1824, before increasing to a level in 1832 about 10 percent above that of 1770 (Bowley 1898). Farm-level time-series data yield a similar result — real wages in the southeast declined by 13 percent from 1770-79 to 1800-09, and remained low until the 1820s (Clark 2001).

Enclosures

Some historians contend that the Parliamentary enclosure movement, and the plowing over of commons and waste land, reduced the access of rural households to land for growing food, grazing animals, and gathering fuel, and led to the immiseration of large numbers of agricultural laborers and their families (Hammond and Hammond 1911; Humphries 1990). More recent research, however, suggests that only a relatively small share of agricultural laborers had common rights, and that there was little open access common land in southeastern England by 1750 (Shaw-Taylor 2001; Clark and Clark 2001). Thus, the Hammonds and Humphries probably overstated the effect of late eighteenth-century enclosures on agricultural laborers’ living standards, although those laborers who had common rights must have been hurt by enclosures.

Declining cottage industry

Finally, in some parts of the south and east, women and children were employed in wool spinning, lace making, straw plaiting, and other cottage industries. Employment opportunities in wool spinning, the largest cottage industry, declined in the late eighteenth century, and employment in the other cottage industries declined in the early nineteenth century (Pinchbeck 1930; Boyer 1990). The decline of cottage industry reduced the ability of women and children to contribute to household income. This, in combination with the decline in agricultural laborers’ wage rates and, in some villages, the loss of common rights, caused many rural households’ incomes in southern England to fall dangerously close to subsistence by 1795.

North and Midlands

The situation was different in the north and midlands. The real wages of day laborers in agriculture remained roughly constant from 1770 to 1810, and then increased sharply, so that by the 1820s wages were about 50 percent higher than they were in 1770 (Clark 2001). Moreover, while some parts of the north and midlands experienced a decline in cottage industry, in Lancashire and the West Riding of Yorkshire the concentration of textile production led to increased employment opportunities for women and children.

The Political Economy of the Poor Law, 1795-1834

A comparison of English poor relief with poor relief on the European continent reveals a puzzle: from 1795 to 1834 relief expenditures per capita, and expenditures as a share of national product, were significantly higher in England than on the continent. However, differences in spending between England and the continent were relatively small before 1795 and after 1834 (Lindert 1998). Simple economic explanations cannot account for the different patterns of English and continental relief.

Labor-hiring farmers take advantage of the poor relief system

The increase in relief spending in the late-eighteenth and early-nineteenth centuries was partly a result of politically-dominant farmers taking advantage of the poor relief system to shift some of their labor costs onto other taxpayers (Boyer 1990). Most rural parish vestries were dominated by labor-hiring farmers as a result of “the principle of weighting the right to vote according to the amount of property occupied,” introduced by Gilbert’s Act (1782), and extended in 1818 by the Parish Vestry Act (Brundage 1978). Relief expenditures were financed by a tax levied on all parishioners whose property value exceeded some minimum level. A typical rural parish’s taxpayers can be divided into two groups: labor-hiring farmers and non-labor-hiring taxpayers (family farmers, shopkeepers, and artisans). In grain-producing areas, where there were large seasonal variations in the demand for labor, labor-hiring farmers anxious to secure an adequate peak season labor force were able to reduce costs by laying off unneeded workers during slack seasons and having them collect poor relief. Large farmers used their political power to tailor the administration of poor relief so as to lower their labor costs. Thus, some share of the increase in relief spending in the early nineteenth century represented a subsidy to labor-hiring farmers rather than a transfer from farmers and other taxpayers to agricultural laborers and their families. In pasture farming areas, where the demand for labor was fairly constant over the year, it was not in farmers’ interests to shed labor during the winter, and the number of able-bodied laborers receiving casual relief was smaller. The Poor Law Amendment Act of 1834 reduced the political power of labor-hiring farmers, which helps to account for the decline in relief expenditures after that date.

The New Poor Law, 1834-70

The increase in spending on poor relief in the late eighteenth and early nineteenth centuries, combined with the attacks on the Poor Laws by Thomas Malthus and other political economists and the agricultural laborers’ revolt of 1830-31 (the Captain Swing riots), led the government in 1832 to appoint the Royal Commission to Investigate the Poor Laws. The Commission published its report, written by Nassau Senior and Edwin Chadwick, in March 1834. The report, described by historian R. H. Tawney (1926) as “brilliant, influential and wildly unhistorical,” called for sweeping reforms of the Poor Law, including the grouping of parishes into Poor Law unions, the abolition of outdoor relief for the able-bodied and their families, and the appointment of a centralized Poor Law Commission to direct the administration of poor relief. Soon after the report was published Parliament adopted the Poor Law Amendment Act of 1834, which implemented some of the report’s recommendations and left others, like the regulation of outdoor relief, to the three newly appointed Poor Law Commissioners.

By 1839 the vast majority of rural parishes had been grouped into poor law unions, and most of these had built or were building workhouses. On the other hand, the Commission met with strong opposition when it attempted in 1837 to set up unions in the industrial north, and the implementation of the New Poor Law was delayed in several industrial cities. In an attempt to regulate the granting of relief to able-bodied males, the Commission, and its replacement in 1847, the Poor Law Board, issued several orders to selected Poor Law Unions. The Outdoor Labour Test Order of 1842, sent to unions without workhouses or where the workhouse test was deemed unenforceable, stated that able-bodied males could be given outdoor relief only if they were set to work by the union. The Outdoor Relief Prohibitory Order of 1844 prohibited outdoor relief for both able-bodied males and females except on account of sickness or “sudden and urgent necessity.” The Outdoor Relief Regulation Order of 1852 extended the labor test for those relieved outside of workhouses.

Historical debate about the effect of the New Poor Law

Historians do not agree on the effect of the New Poor Law on the local administration of relief. Some contend that the orders regulating outdoor relief were largely evaded by both rural and urban unions, many of which continued to grant outdoor relief to unemployed and underemployed males (Rose 1970; Digby 1975). Others point to the falling numbers of able-bodied males receiving relief in the national statistics and the widespread construction of union workhouses, and conclude that the New Poor Law succeeded in abolishing outdoor relief for the able-bodied by 1850 (Williams 1981). A recent study by Lees (1998) found that in three London parishes and six provincial towns in the years around 1850 large numbers of prime-age males continued to apply for relief, and that a majority of those assisted were granted outdoor relief. The Poor Law also played an important role in assisting the unemployed in industrial cities during the cyclical downturns of 1841-42 and 1847-48 and the Lancashire cotton famine of 1862-65 (Boot 1990; Boyer 1997). There is no doubt, however, that spending on poor relief declined after 1834 (see Table 1). Real per capita relief expenditures fell by 43 percent from 1831 to 1841, and increased slowly thereafter.

Beginning in 1840, data on the number of persons receiving poor relief are available for two days a year, January 1 and July 1; the “official” estimates in Table 1 of the annual number relieved were constructed as the average of the number relieved on these two dates. Studies conducted by Poor Law administrators indicate that the number recorded in the day counts was less than half the number assisted during the year. Lees’s “revised” estimates of annual relief recipients (see Table 1) assume that the ratio of actual to counted paupers was 2.24 for 1850-1900 and 2.15 for 1905-14; these suggest that from 1850 to 1870 about 10 percent of the population was assisted by the Poor Law each year. Given the temporary nature of most spells of relief, over a three-year period as much as 25 percent of the population made use of the Poor Law (Lees 1998).
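The construction of these estimates is simple arithmetic, and a short sketch may make the method concrete. The day counts and population figure below are hypothetical, chosen only to illustrate the calculation (average the two day counts, scale by Lees’s ratio of actual to counted paupers, and divide by population):

def annual_pauper_share(jan_count, jul_count, population, ratio=2.24):
    """Estimated share of the population relieved at some point during a year.

    The 'official' figure is the average of the January 1 and July 1 day counts;
    Lees's revision scales it by the ratio of actual to counted paupers
    (2.24 for 1850-1900, 2.15 for 1905-14). The inputs used below are hypothetical.
    """
    official_annual = (jan_count + jul_count) / 2
    revised_annual = official_annual * ratio
    return revised_annual / population

# Hypothetical example: day counts of 900,000 and 850,000 paupers in a population
# of 20 million imply that roughly 9.8 percent of the population was relieved
# during the year, in line with the "about 10 percent" figure cited for 1850-70.
print(annual_pauper_share(900_000, 850_000, 20_000_000))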

The Crusade Against Outrelief

In the 1870s Poor Law unions throughout England and Wales curtailed outdoor relief for all types of paupers. This change in policy, known as the Crusade Against Outrelief, was not a result of new government regulations, although it was encouraged by the newly formed Local Government Board (LGB). The Board was aided in convincing the public of the need for reform by the propaganda of the Charity Organization Society (COS), founded in 1869. The LGB and the COS maintained that the ready availability of outdoor relief destroyed the self-reliance of the poor. The COS went on to argue that the shift from outdoor to workhouse relief would significantly reduce the demand for assistance, since most applicants would refuse to enter workhouses, and therefore reduce Poor Law expenditures. A policy that promised to raise the morals of the poor and reduce taxes was hard for most Poor Law unions to resist (MacKinnon 1987).

The effect of the Crusade can be seen in Table 1. The deterrent effect associated with the workhouse led to a sharp fall in numbers on relief — from 1871 to 1876, the number of paupers receiving outdoor relief fell by 33 percent. The share of paupers relieved in workhouses increased from 12-15 percent in 1841-71 to 22 percent in 1880, and it continued to rise to 35 percent in 1911. The extent of the crusade varied considerably across poor law unions. Urban unions typically relieved a much larger share of their paupers in workhouses than did rural unions, but there were significant differences in practice across cities. In 1893, over 70 percent of the paupers in Liverpool, Manchester, Birmingham, and in many London Poor Law unions received indoor relief; however, in Leeds, Bradford, Newcastle, Nottingham and several other industrial and mining cities the majority of paupers continued to receive outdoor relief (Booth 1894).

Change in the attitude of the poor toward relief

The last third of the nineteenth century also witnessed a change in the attitude of the poor towards relief. Prior to 1870, a large share of the working class regarded access to public relief as an entitlement, although they rejected the workhouse as a form of relief. Their opinions changed over time, however, and by the end of the century most workers viewed poor relief as stigmatizing (Lees 1998). This change in perceptions led many poor people to go to great lengths to avoid applying for relief, and available evidence suggests that there were large differences between poverty rates and pauperism rates in late Victorian Britain. For example, in York in 1900, 3,451 persons received poor relief at some point during the year, less than half of the 7,230 persons estimated by Rowntree to be living in primary poverty.

The Declining Role of the Poor Law, 1870-1914

Increased availability of alternative sources of assistance

The share of the population on relief fell sharply from 1871 to 1876, and then continued to decline, at a much slower pace, until 1914. Real per capita relief expenditures increased from 1876 to 1914, largely because the Poor Law provided increasing amounts of medical care for the poor. Otherwise, the role played by the Poor Law declined over this period, due in large part to an increase in the availability of alternative sources of assistance. There was a sharp increase in the second half of the nineteenth century in the membership of friendly societies — mutual help associations providing sickness, accident, and death benefits, and sometimes old age (superannuation) benefits — and of trade unions providing mutual insurance policies. These benefits provided workers and their families with some protection against income loss, and few who belonged to friendly societies or unions providing “friendly” benefits ever needed to apply to the Poor Law for assistance.

Work relief

Local governments continued to assist unemployed males after 1870, but typically not through the Poor Law. Beginning with the Chamberlain Circular in 1886 the Local Government Board encouraged cities to set up work relief projects when unemployment was high. The circular stated that “it is not desirable that the working classes should be familiarised with Poor Law relief,” and that the work provided should “not involve the stigma of pauperism.” In 1905 Parliament adopted the Unemployed Workman Act, which established distress committees in all large cities to provide temporary employment to workers who were unemployed because of a “dislocation of trade.”

Liberal welfare reforms, 1906-1911

Between 1906 and 1911 Parliament passed several pieces of social welfare legislation collectively known as the Liberal welfare reforms. These laws provided free meals and medical inspections (later treatment) for needy school children (1906, 1907, 1912) and weekly pensions for poor persons over age 70 (1908), and established national sickness and unemployment insurance (1911). The Liberal reforms purposely reduced the role played by poor relief, and paved the way for the abolition of the Poor Law.

The Last Years of the Poor Law

During the interwar period the Poor Law served as a residual safety net, assisting those who fell through the cracks of the existing social insurance policies. The high unemployment of 1921-38 led to a sharp increase in numbers on relief. The official count of relief recipients rose from 748,000 in 1914 to 1,449,000 in 1922; the number relieved averaged 1,379,800 from 1922 to 1938. A large share of those on relief were unemployed workers and their dependents, especially in 1922-26. Despite the extension of unemployment insurance in 1920 to virtually all workers except the self-employed and those in agriculture or domestic service, there were still large numbers who either did not qualify for unemployment benefits or who had exhausted their benefits, and many of them turned to the Poor Law for assistance. The vast majority were given outdoor relief; from 1921 to 1923 the number of outdoor relief recipients increased by 1,051,000 while the number receiving indoor relief increased by 21,000.

The Poor Law becomes redundant and is repealed

Despite the important role played by poor relief during the interwar period, the government continued to adopt policies that bypassed the Poor Law and left it “to die by attrition and surgical removals of essential organs” (Lees 1998). The Local Government Act of 1929 abolished the Poor Law unions, and transferred the administration of poor relief to the counties and county boroughs. In 1934 the responsibility for assisting those unemployed who were outside the unemployment insurance system was transferred from the Poor Law to the Unemployment Assistance Board. Finally, from 1945 to 1948, Parliament adopted a series of laws that together formed the basis for the welfare state, and made the Poor Law redundant. The National Assistance Act of 1948 officially repealed all existing Poor Law legislation, and replaced the Poor Law with the National Assistance Board to act as a residual relief agency.

Table 1
Relief Expenditures and Numbers on Relief, 1696-1936

Columns (left to right): Year; expenditures on relief (£000s); real expenditures per capita (index, 1803 = 100); expenditures as a share of GDP (Slack); expenditures as a share of GDP (Lindert); number relieved, official count (000s); share of the population relieved, official (percent); number relieved, Lees’s revised estimate (000s); share of the population relieved, Lees (percent); share of paupers relieved indoors (percent). Spaces within figures are thousands separators; blank cells indicate that no estimate is available.
1696 400 24.9 0.8
1748-50 690 45.8 1.0 0.99
1776 1 530 64.0 1.6 1.59
1783-85 2 004 75.6 2.0 1.75
1803 4 268 100.0 1.9 2.15 1 041 11.4 8.0
1813 6 656 91.8 2.58
1818 7 871 116.8
1821 6 959 113.6 2.66
1826 5 929 91.8
1831 6 799 107.9 2.00
1836 4 718 81.1
1841 4 761 61.8 1.12 1 299 8.3 2 910 18.5 14.8
1846 4 954 69.4 1 332 8.0 2 984 17.8 15.0
1851 4 963 67.8 1.07 941 5.3 2 108 11.9 12.1
1856 6 004 62.0 917 4.9 2 054 10.9 13.6
1861 5 779 60.0 0.86 884 4.4 1 980 9.9 13.2
1866 6 440 65.0 916 4.3 2 052 9.7 13.7
1871 7 887 73.3 1 037 4.6 2 323 10.3 14.2
1876 7 336 62.8 749 3.1 1 678 7.0 18.1
1881 8 102 69.1 0.70 791 3.1 1 772 6.9 22.3
1886 8 296 72.0 781 2.9 1 749 6.4 23.2
1891 8 643 72.3 760 2.6 1 702 5.9 24.0
1896 10 216 84.7 816 2.7 1 828 6.0 25.9
1901 11 549 84.7 777 2.4 1 671 5.2 29.2
1906 14 036 96.9 892 2.6 1 918 5.6 31.1
1911 15 023 93.6 886 2.5 1 905 5.3 35.1
1921 31 925 75.3 627 1.7 35.7
1926 40 083 128.3 1 331 3.4 17.7
1931 38 561 133.9 1 090 2.7 21.5
1936 44 379 165.7 1 472 3.6 12.6

Notes: Relief expenditure data are for the year ended on March 25. In calculating real per capita expenditures, I used cost of living and population data for the previous year.

Table 2
County-level Poor Relief Data, 1783-1831

Columns (left to right): County; per capita relief spending (shillings) in 1783-85, 1802-03, 1812, and 1831; percent of the population relieved, 1802-03; share of relief recipients over 60 or disabled, 1802-03 (percent); share of land in arable farming, c. 1836 (percent); share of the population employed in agriculture, 1821 (percent).
North
Durham 2.78 6.50 9.92 6.83 9.3 22.8 54.9 20.5
Northumberland 2.81 6.67 7.92 6.25 8.8 32.2 46.5 26.8
Lancashire 3.48 4.42 7.42 4.42 6.7 15.0 27.1 11.2
West Riding 2.91 6.50 9.92 5.58 9.3 18.1 30.0 19.6
Midlands
Stafford 4.30 6.92 8.50 6.50 9.1 17.2 44.8 26.6
Nottingham 3.42 6.33 10.83 6.50 6.8 17.3 na 35.4
Warwick 6.70 11.25 13.33 9.58 13.3 13.7 47.5 27.9
Southeast
Oxford 7.07 16.17 24.83 16.92 19.4 13.2 55.8 55.4
Berkshire 8.65 15.08 27.08 15.75 20.0 12.7 58.5 53.3
Essex 9.10 12.08 24.58 17.17 16.4 12.7 72.4 55.7
Suffolk 7.35 11.42 19.33 18.33 16.6 11.4 70.3 55.9
Sussex 11.52 22.58 33.08 19.33 22.6 8.7 43.8 50.3
Southwest
Devon 5.53 7.25 11.42 9.00 12.3 23.1 22.5 40.8
Somerset 5.24 8.92 12.25 8.83 12.0 20.8 24.4 42.8
Cornwall 3.62 5.83 9.42 6.67 6.6 31.0 23.8 37.7
England & Wales 4.06 8.92 12.75 10.08 11.4 16.0 48.0 33.0

References

Blaug, Mark. “The Myth of the Old Poor Law and the Making of the New.” Journal of Economic History 23 (1963): 151-84.

Blaug, Mark. “The Poor Law Report Re-examined.” Journal of Economic History 24 (1964): 229-45.

Boot, H. M. “Unemployment and Poor Law Relief in Manchester, 1845-50.” Social History 15 (1990): 217-28.

Booth, Charles. The Aged Poor in England and Wales. London: MacMillan, 1894.

Boyer, George R. “Poor Relief, Informal Assistance, and Short Time during the Lancashire Cotton Famine.” Explorations in Economic History 34 (1997): 56-76.

Boyer, George R. An Economic History of the English Poor Law, 1750-1850. Cambridge: Cambridge University Press, 1990.

Brundage, Anthony. The Making of the New Poor Law. New Brunswick, N.J.: Rutgers University Press, 1978.

Clark, Gregory. “Farm Wages and Living Standards in the Industrial Revolution: England, 1670-1869.” Economic History Review, 2nd series 54 (2001): 477-505.

Clark, Gregory and Anthony Clark. “Common Rights to Land in England, 1475-1839.” Journal of Economic History 61 (2001): 1009-36.

Digby, Anne. “The Labour Market and the Continuity of Social Policy after 1834: The Case of the Eastern Counties.” Economic History Review, 2nd series 28 (1975): 69-83.

Eastwood, David. Governing Rural England: Tradition and Transformation in Local Government, 1780-1840. Oxford: Clarendon Press, 1994.

Fraser, Derek, editor. The New Poor Law in the Nineteenth Century. London: Macmillan, 1976.

Hammond, J. L. and Barbara Hammond. The Village Labourer, 1760-1832. London: Longmans, Green, and Co., 1911.

Hampson, E. M. The Treatment of Poverty in Cambridgeshire, 1597-1834. Cambridge: Cambridge University Press, 1934.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Steven. Poverty and Welfare in England, 1700-1850: A Regional Perspective. Manchester: Manchester University Press, 2000.

Lees, Lynn Hollen. The Solidarities of Strangers: The English Poor Laws and the People, 1770-1948. Cambridge: Cambridge University Press, 1998.

Lindert, Peter H. “Poor Relief before the Welfare State: Britain versus the Continent, 1780-1880.” European Review of Economic History 2 (1998): 101-40.

MacKinnon, Mary. “English Poor Law Policy and the Crusade Against Outrelief.” Journal of Economic History 47 (1987): 603-25.

Marshall, J. D. The Old Poor Law, 1795-1834. 2nd edition. London: Macmillan, 1985.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Pound, John. Poverty and Vagrancy in Tudor England, 2nd edition. London: Longmans, 1986.

Rose, Michael E. “The New Poor Law in an Industrial Area.” In The Industrial Revolution, edited by R.M. Hartwell. Oxford: Oxford University Press, 1970.

Rose, Michael E. The English Poor Law, 1780-1930. Newton Abbot: David & Charles, 1971.

Shaw-Taylor, Leigh. “Parliamentary Enclosure and the Emergence of an English Agricultural Proletariat.” Journal of Economic History 61 (2001): 640-62.

Slack, Paul. Poverty and Policy in Tudor and Stuart England. London: Longmans, 1988.

Slack, Paul. The English Poor Law, 1531-1782. London: Macmillan, 1990.

Smith, Richard. “Charity, Self-interest and Welfare: Reflections from Demographic and Family History.” In Charity, Self-Interest and Welfare in the English Past, edited by Martin Daunton. New York: St. Martin’s, 1996.

Sokoll, Thomas. Household and Family among the Poor: The Case of Two Essex Communities in the Late Eighteenth and Early Nineteenth Centuries. Bochum: Universitätsverlag Brockmeyer, 1993.

Solar, Peter M. “Poor Relief and English Economic Development before the Industrial Revolution.” Economic History Review, 2nd series 48 (1995): 1-22.

Tawney, R. H. Religion and the Rise of Capitalism: A Historical Study. London: J. Murray, 1926.

Webb, Sidney and Beatrice Webb. English Poor Law History. Part I: The Old Poor Law. London: Longmans, 1927.

Williams, Karel. From Pauperism to Poverty. London: Routledge, 1981.

Citation: Boyer, George. “English Poor Laws”. EH.Net Encyclopedia, edited by Robert Whaples. May 7, 2002. URL http://eh.net/encyclopedia/english-poor-laws/

Path Dependence

Douglas Puffert, University of Warwick

Path dependence is the dependence of economic outcomes on the path of previous outcomes, rather than simply on current conditions. In a path-dependent process, “history matters” — it has an enduring influence. Choices made on the basis of transitory conditions can persist long after those conditions change. Thus, explanations of the outcomes of path-dependent processes require looking at history, rather than simply at current conditions of technology, preferences, and other factors that determine outcomes.

Path-dependent features of the economy range from small-scale technical standards to large-scale institutions and patterns of economic development. Several of the most prominent path-dependent features of the economy are technical standards, such as the “QWERTY” standard typewriter (and computer) keyboard and the “standard gauge” of railway track — i.e., the width between the rails. The case of QWERTY has been particularly controversial, and it is discussed at some length below. The case of track gauge is useful for introducing several typical features of path-dependent processes and their outcomes.

Standard Railway Gauges and the Questions They Suggest

Four feet 8-1/2 inches (1.435 meters) is the standard gauge for railways throughout North America, in much of Europe, and on over half of the world’s railway routes overall. Indeed, it has been the most common gauge throughout the history of modern railways, since the late 1820s. Should we conclude, as economists often do for popular products or practices, that this standard gauge has proven itself technically and economically optimal? Has it been chosen because of its superior performance or lower costs? If so, has it proven superior for every new generation of railway technology and for all changes in traffic conditions? What of the other gauges, broader or narrower, that are used as local standards in some parts of the world — are these gauges generally used because different technology or different traffic conditions in those regions favor these gauges?

The answer to all these questions is no. The consensus of engineering opinion has usually favored gauges broader than 4’8.5″, and in the late nineteenth century an important minority of engineers favored narrower gauges. Nevertheless, the gauge of 4’8.5″ has always had greater use in practice because of the history of its use. Indeed, even the earliest modern railways adopted the gauge as a result of history. The “father of railways,” British engineer George Stephenson, had experience using the gauge on an older system of primitive coal tramways serving a small group of mines near Newcastle, England. Rather than determining optimal gauge anew for a new generation of railways, he simply continued his prior practice. Thus the gauge first adopted more than two hundred years ago for horse-drawn coal carts is the gauge now used for powerful locomotives, massive tonnages of freight shipments, and passenger trains traveling at speeds as great as 300 kilometers per hour (186 mph).

We will examine the case of railway track gauge in more detail below, along with other instances of path dependence. We first take an analytical look at what conditions may give rise to path dependence — or prevent it from arising, as some critics of the importance of path dependence have argued.

What Conditions Give Rise to Path Dependence?

Durability of Capital Equipment

The most trivial — and uninteresting — form of path dependence is based simply on the durability of capital equipment. Obsolete, inferior equipment may remain in use because its fixed cost is already “sunk” or paid for, while its variable costs are lower than the total costs of replacing it with a new generation of equipment. The duration of this sort of path dependence is limited by the service life of the obsolete equipment.

Technical Interrelatedness

In railways, none of the original gauge-specific capital equipment from the early nineteenth century remains in use today. Why, then, has Stephenson’s standard gauge persisted? Part of the reason is the technical interrelatedness of railway track and the wheel sets of rolling stock. When either track or rolling stock wears out, it must be replaced with equipment of the same gauge, so that the wheels will still fit the track and the track will still fit the wheels. Railways almost never replace all their track and rolling stock at the same time. Thus a gauge readily persists beyond the life of any piece of equipment that uses it.

Increasing Returns

A further reason for the persistence, and indeed spread, of the Stephenson gauge is increasing returns to the extent of use. Different railway companies or administrations benefit from using a common gauge, because this saves costs and improves both service quality and profits on through-shipments or passenger trips that pass over each other’s track. New railways have therefore nearly always adopted the gauge of established connecting lines, even when engineers have favored different gauges. Once built, railway lines are reluctant to change their gauge unless neighboring lines do so as well. This adds coordination costs to the physical costs of any conversion.

In early articles on path dependence, Paul David (1985, 1987) listed these same three conditions for path dependence: first, the technical interrelatedness of system components; second, increasing returns to scale in the use of a common technique; and, third, “quasi-irreversibility of investment,” for example in the durability of capital equipment (or of human capital). The third condition gives rise to switching costs, while the first two conditions make gradual change impractical and rapid change costly, due to the transactions costs required to coordinate the actions of different agents. Thus together, these three conditions may lend persistence or stability to a particular path of outcomes, “locking in” a particular feature of the economy, such as a standard railway track gauge.

David’s early work on path dependence represents, in part, the culmination of an earlier economic literature on technical interrelatedness (Veblen 1915; Frankel 1955; Kindleberger 1964; David 1975). By contrast, the other co-developer of the concept of path dependence, W. Brian Arthur, based his ideas on an analogy between increasing returns in the economy, particularly when expressed in the form of positive externalities, and conditions that give rise to positive feedbacks in the natural sciences.

Dynamic Increasing Returns to Adoption

In a series of theoretical papers starting in the early 1980s, Arthur (1989, 1990, 1994) emphasized the role of “increasing returns to adoption,” especially dynamic increasing returns that develop over time. These increasing returns might arise on the supply side of a market, as a result of learning effects that lower the cost or improve the quality of a product as its cumulative production increases. Alternatively, increasing returns might arise on the demand side of a market, as a result of positive “network” externalities, which raise the value of a product or technique for each user as the total number of users increases (Katz and Shapiro 1985, 1994). In the context of railways, for example, a railway finds a particular track gauge more valuable if a greater number of connecting railways use that gauge. (Note that a track gauge is not a “product” but rather a “technology,” as Arthur puts it, or a “technique,” as I prefer to call it.)

In Arthur’s (1989) basic analytical framework, “small events,” which he treated as random, lead to early fluctuations in the market shares of competing techniques. These fluctuations are magnified by positive feedbacks, because techniques with larger market shares tend to be more valuable to new adopters. As a result, one technique grows in market share until it is “locked in” as a de facto standard. In a simple version of Arthur’s model (Table 1), different consumers or firms initially favor different products or techniques. At first, market share for each technique fluctuates randomly, depending on how many early adopters happen to prefer each technique. Eventually, however, one of the techniques will gain enough of a lead in market share that it will offer higher payoffs to everyone — including to the consumers or firms that have a preference for the minority technique. For example, if the total number of adoptions for technique A reaches 80, while the number of adoptions of B is less than 60, then technique A offers higher payoffs for everyone, and it is locked in as the de facto standard.

Table 1. Adoption Payoffs in Arthur’s Basic Model

Number of previous adoptions 0 10 20 30 40 50 60 70 80 90
“R-type agents” (who prefer technique A):
Technique A 10 11 12 13 14 15 16 17 18 19
Technique B 8 9 10 11 12 13 14 15 16 17
“S-type agents” (who prefer technique B):
Technique A 8 9 10 11 12 13 14 15 16 17
Technique B 10 11 12 13 14 15 16 17 18 19

Source: Adapted from Arthur (1989).

Which of the competing techniques becomes the de facto standard is unpredictable on the basis of systematic conditions. Rather, later outcomes depend on the specific early history of the process. If early “small” events and choices are governed in part by non-systematic factors — even “historical accidents” — then these factors may have large effects on later outcomes. This is in contrast to the predictions of standard economic models, where decreasing returns and negative feedbacks diminish the impact of non-systematic factors. To cite another illustration from the history of railways, George Stephenson’s personal background was a non-systematic or “accidental” factor that, due to positive feedbacks, had a large influence on the entire subsequent history of track gauge.
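
A minimal simulation (in Python) can make this unpredictability concrete. The sketch below uses the payoff structure of Table 1, extended linearly beyond the table’s range, with agent types arriving in random order; the function names, run length, and seeds are illustrative assumptions, not Arthur’s own code.

import random

def simulate_adoptions(n_agents=1000, seed=None):
    """One run of sequential adoption under the Table 1 payoffs: each payoff
    rises by one unit per ten prior adoptions of that technique."""
    rng = random.Random(seed)
    n_a = n_b = 0  # cumulative adoptions of techniques A and B
    for _ in range(n_agents):
        r_type = rng.random() < 0.5  # R-types prefer A, S-types prefer B
        payoff_a = (10 if r_type else 8) + 0.1 * n_a
        payoff_b = (8 if r_type else 10) + 0.1 * n_b
        if payoff_a > payoff_b or (payoff_a == payoff_b and r_type):
            n_a += 1  # ties go to the agent's naturally preferred technique
        else:
            n_b += 1
    return n_a, n_b

# Early adoptions fluctuate randomly, but once one technique leads by more
# than 20 adoptions it offers the higher payoff even to agents who prefer
# the other technique, and the leader is locked in as the de facto standard.
for run in range(5):
    print(run, simulate_adoptions(seed=run))

Across many such runs each technique ends up dominant about half the time, which is the sense in which the outcome depends on the path of early adoptions rather than on fundamentals alone.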

Efficiency, Foresight, Remedies, and the Controversy over Path Dependence

Arthur’s (1989) basic model of a path-dependent process considered a case in which the selection of one outcome (or one path of outcomes) rather than another has no consequences for general economic efficiency — different economic agents favor different techniques, but no technique is best for all. Arthur also, however, used a variation of his modeling approach to argue that an inefficient outcome is possible. He considered a case where one technique offers higher payoffs than another for larger numbers of cumulative adoptions (technique B in Table 2), while for smaller numbers the other technique offers higher payoffs (technique A). Arthur argued that, given his model’s assumptions, each new adopter, arriving in turn, will prefer technique A and adopt only it, resulting later in lower total payoffs than would have resulted if each adopter had chosen technique B. Arthur’s assumptions were, first, that each agent’s payoff depends only on the number of previous adoptions and, second, that the competing techniques are “unsponsored,” that is, not owned and promoted by suppliers.

Table 2. Adoption Payoffs in Arthur’s Alternative Model

Number of previous adoptions 0 10 20 30 40 50 60 70 80 90
All agents:
Technique A 10 11 12 13 14 15 16 17 18 19
Technique B 4 7 10 13 16 19 22 25 28 31

Source: Arthur (1989), table 2.
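
Under the same kind of sequential, payoff-maximizing adoption, the Table 2 payoffs lead every adopter to choose technique A, even though universal adoption of B would have yielded higher payoffs. The following sketch, which again extends the table’s payoffs linearly as an illustrative assumption, makes the comparison explicit.

def alternative_model(n_agents=100):
    """Sequential adoption under the Table 2 payoffs: technique A pays
    10 + 0.1 per prior adoption of A, technique B pays 4 + 0.3 per prior
    adoption of B."""
    n_a = n_b = 0
    realized = 0.0  # sum of payoffs actually received by the adopters
    for _ in range(n_agents):
        payoff_a = 10 + 0.1 * n_a
        payoff_b = 4 + 0.3 * n_b
        if payoff_a >= payoff_b:  # A starts ahead and, with B never adopted, stays ahead
            realized += payoff_a
            n_a += 1
        else:
            realized += payoff_b
            n_b += 1
    return n_a, n_b, realized

n_a, n_b, realized = alternative_model()
# Counterfactual: total payoffs if the same 100 adopters had all chosen B.
all_b = sum(4 + 0.3 * k for k in range(100))
print(n_a, n_b, round(realized), round(all_b))  # -> 100 0 1495 1885

No individual adopter ever has an incentive to deviate, yet the realized total falls short of the total that universal adoption of B would have produced; this is the sense in which Arthur argued that lock-in to an inferior technique is possible.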

Liebowitz and Margolis’s Critique of Arthur’s Model

Arthur’s discussion of efficiency provided the starting point for a theoretical critique of path dependence offered by Stan Liebowitz and Stephen E. Margolis (1995). Liebowitz and Margolis argued that two conditions, when present, prevent path-dependent processes from resulting in inefficient outcomes: first, foresight into the effects of choices and, second, opportunities to coordinate people’s choices, using direct communication, market interactions, and active product promotion. Using Arthur’s payoff table (Table 2), Liebowitz and Margolis argued that the purposeful, rational behavior of forward-looking, profit-seeking economic agents can override the effects of events in the past. In particular, if agents can foresee that some potential outcomes will be more efficient than others, then they have incentives to avoid the suboptimal ones. Agents who already own — or else find ways to appropriate — products or techniques that offer superior outcomes can often earn substantial profits by steering the process to favor those products or techniques. For the situation in Table 2, for example, the supplier of product or technique B could draw early adopters to that technique by temporarily setting a price below cost, making a profit by raising price above cost later.

Thus, in Liebowitz and Margolis’s analysis, the sort of inefficient or inferior outcomes that can arise in Arthur’s model are often not true equilibrium outcomes that market processes would lead to in the real world. Rather, they argued, purposeful behavior is likely to remedy any inferior outcome — except where the costs of a remedy, including transactions costs, are greater than the potential benefits. In that case, they argued, an apparently “inferior” outcome is actually the most efficient one available, once all costs are taken into account. “Remediable” inefficiency, they argued in contrast, is highly unlikely to persist.

Liebowitz and Margolis’s analysis gave rise to a substantial controversy over the meaning and implications of path dependence. In the view of Liebowitz and Margolis, the major claims of the economists who promote the concept of path dependence have amounted to assertions of remediable inefficiency. Liebowitz and Margolis coined the term “third-degree” path dependence to refer to such cases. They contrasted this category both to “first-degree” path dependence, which has no implications for efficiency, and to “second-degree” path dependence, where transactions costs and/or the impossibility of foresight lead to outcomes that offer lower payoffs than some hypothetical — but unattainable — alternative. In Liebowitz and Margolis’s view, only “third-degree” path dependence offers scope for optimizing behavior, and thus only this type stands in conflict with what they call “the neoclassical model of relentlessly rational behavior leading to efficient, and therefore predictable, outcomes” (1995). Only this category of path dependence, they argue, would constitute market failure. They cast strong doubt on the likelihood of its occurrence, and they asserted that no empirical examples have been demonstrated.

Responses to Liebowitz and Margolis’s Critique

Proponents of the importance of path dependence have responded, in large part, by asserting that the interesting features of path dependence have little to do with the question of remediability. David (1997, 2000) argued that the concept of third-degree path dependence proves incoherent upon close examination and that Liebowitz and Margolis had misconstrued the issues at stake. The present author asserted that one can usefully incorporate several of Liebowitz and Margolis’s ideas on foresight and forward-looking behavior into the theory of path dependence while still affirming the claims made by proponents (Puffert 2000, 2002, 2003).

Imperfect Foresight and Inefficiency

One point that I have emphasized is that the cases of path dependence cited by proponents typically involve imperfect foresight, and sometimes other features, that make remediation impossible. Indeed, proponents of the importance of path dependence partly recognized this point prior to the work of Liebowitz and Margolis. Nobel Prize-winner Kenneth Arrow argued in his foreword to Arthur’s collected articles that Arthur’s modeling approach applies specifically to cases where foresight is imperfect, or “expectations are based on limited information” (Arthur 1994). Thus, economic agents cannot foresee future payoffs, and they cannot know how best to direct the process to the outcomes they would prefer. In terms of the payoffs in Table 2, technique A might become locked in because adopters as well as suppliers initially think, mistakenly, that technique A will continue to offer the higher payoffs. Similarly, David (1987) had argued still earlier that path dependence is sometimes of interest precisely because lock-in might happen too quickly, before the payoffs of different paths are known. Lock-in, as David and Arthur use the term, applies to a stable equilibrium — i.e., to an outcome that, if inefficient, is not remediable. (Liebowitz and Margolis introduce a different definition of lock-in.)

Imperfect foresight is, of course, a common condition — and especially common for new, unproven products (or techniques) in untested markets. Part of the difference between path-dependent and “path-independent” processes is that foresight does not matter for path-independent processes. No matter what the path of events, path-independent processes still end up at unique outcomes that are predictable on the basis of fundamental conditions. Generally, these predictable outcomes are those that are most efficient and that offer the highest payoffs. By contrast, path-dependent processes have multiple potential outcomes, and the outcome selected is not necessarily the one offering the highest payoffs. This contrast with the results of standard economic analysis is part of what makes path dependence interesting.

Winners, Losers and Path Dependence

Path dependence is also interesting, however, when the issue at stake is not the overall efficiency (i.e., Pareto efficiency) of the outcome, but rather the distribution of rewards between “winners” and “losers” — for example, between firms competing to establish their products or techniques as a de facto standard, resulting in profits or economic rents to the winner only. This is something that finds no place in Liebowitz and Margolis’s taxonomy of “degrees.” In keeping with Liebowitz and Margolis’s analysis, competing firms certainly exercise forward-looking behavior in efforts to determine the outcome, but imperfect information and imperfect control over circumstances still make the outcome path dependent, as some of the case studies below illustrate.

Lack of Agreement on What the Debate Is About

Finally, market failure per se has never been the primary concern of proponents of the importance of path dependence. Even when proponents have highlighted inefficiency as one possible consequence of path dependence, this inefficiency is often the result of imperfect foresight rather than of market failure. Market failure is, however, the primary concern of Liebowitz and Margolis. This difference in perspective is one reason that the arguments of proponents and opponents have often failed to meet head on, as we shall consider in several case studies.

These contrasting analytical arguments can best be assessed through empirical cases. The case of the QWERTY keyboard is considered first, because it has generated the most controversy and it illustrates opposing arguments. Three further cases are particularly useful for the lessons they offer. Britain’s “coal wagon problem” offers a strong example of inefficiency. The worldwide history of railway track gauge, now considered at greater length, illustrates the roles of foresight (or lack thereof) and transitory circumstances, as well as the role of purposeful behavior to remedy outcomes. The case of competition in videocassette recorders illustrates how path dependence is compatible with purposeful behavior, and it shows how proponents and critics of the importance of path dependence can offer different interpretations of the same events.

The Debate over QWERTY

The most influential empirical case has been that of the “QWERTY” standard typewriter and computer keyboard, named for the first letters appearing on the top row of keys. The concept of path dependence first gained widespread attention through David’s (1985, 1986) interpretation of the emergence and persistence of the QWERTY standard. The critique of path dependence began with the alternative interpretation offered by Liebowitz and Margolis (1990).

David (1986) noted that the QWERTY keyboard was designed, in part, to reduce mechanical jamming on an early typewriter design that quickly went out of use, while other early keyboards were designed more with the intention of facilitating fast, efficient typing. In David’s account, QWERTY’s triumph over its initial rivals resulted largely from the happenstance that typing schools and manuals offered instruction in eight-finger “touch” typing first for QWERTY. The availability of trained typists encouraged office managers to buy QWERTY machines, which in turn gave further encouragement to budding typists to learn QWERTY. These positive feedbacks increased QWERTY’s market share until it was established as the de facto standard keyboard.

Furthermore, according to David, similar positive feedbacks have kept typewriter users “locked in” to QWERTY, so that new, superior keyboards could gain no more than a small foothold in the market. In particular the Dvorak Simplified Keyboard, introduced during the 1930s, has been locked out of the market despite experiments showing its superior ergonomic efficiency. David concluded that our choice of a keyboard even today is governed by history, not by what would be ergonomically and economically optimal apart from history.

Liebowitz and Margolis (1990) directed much of their counterargument to the alleged superiority of the Dvorak keyboard. They showed, indeed, that claims David cited for the dramatic superiority of the Dvorak keyboard were based on dubious experiments. The experiments that Liebowitz and Margolis prefer support the conclusion that it could never be profitable to retrain typists from QWERTY to the Dvorak keyboard. Moreover, Liebowitz and Margolis cited ergonomic studies that conclude that the Dvorak keyboard offers at most only a two to six percent efficiency advantage over QWERTY.

Liebowitz and Margolis did not address David’s proposed mechanism for the original triumph of QWERTY. Instead, they argued against the claims of some popular accounts that QWERTY owes its success largely to the demonstration effect of winning a single early typing contest. Liebowitz and Margolis showed that other, well-known typing contests were won by non-QWERTY typists, and so they cast doubt on the impact of a single historical accident. This, however, did not address the argument that David made about that one typing contest. David’s argument was that the contest’s modest impact consisted largely in vindicating the effectiveness of eight-finger touch-typing, which was being taught at the time only for QWERTY.

Although Liebowitz and Margolis never addressed David’s claims about the role of third-party typing instruction, they did argue that suppliers had opportunities to offer training in conjunction with selling typewriters to new offices, so that non-QWERTY keyboards would not have been disadvantaged. They did not, however, present evidence that suppliers actually offered such training during the early years of touch-typing, the time when QWERTY became dominant. Whether the early history of QWERTY was path dependent thus seems to depend largely on the unaddressed question of how much typing instruction was offered directly by suppliers, as Liebowitz and Margolis suggest could have happened, and how much was offered by third parties using QWERTY, as David showed did happen.

Liebowitz and Margolis showed that early typewriter manufacturers competed vigorously in the features of their machines. They inferred, therefore, that the reason that typewriter suppliers increasingly supported and promoted QWERTY must have been that it offered a competitive advantage as the most effective system available. This reasoning is plausible, but it was not supported by direct evidence. The alternative, path-dependent explanation would be that QWERTY’s competitive advantage in winning new customers consisted largely in its lead in trained typists and market share. That is, positive feedbacks would have affected the decisions of customers and, thus, also suppliers. David presented some evidence for this, although, in light of the issues raised by Liebowitz and Margolis, this evidence might now appear less than conclusive.

Liebowitz and Margolis highlighted the following lines from David’s article: “… competition in the absence of perfect futures markets drove the industry prematurely into de facto standardization on the wrong system — and that is where decentralized decision-making subsequently has sufficed to hold it” (emphasis original in David’s article). In Liebowitz and Margolis’s view, the focus here on decentralized decision-making constitutes a claim for market failure and third-degree path dependence, and they treat this as the central claim of David’s article. In the view of the present author, this interpretation is mistaken. David’s claim here plays only a minor role in his argument — indeed it is less than one sentence. Moreover, it is not clear that David’s comment about decentralized decision-making amounts to anything more than a reference to the high transactions costs that would be entailed in organizing a coordinated movement to an alternative outcome — a point that Liebowitz and Margolis themselves have argued in other (non-QWERTY) contexts. (A coordinated change would be necessary because few typists would wish to learn a non-QWERTY system unless they could be sure of conveniently finding a compatible keyboard wherever they go.) David may have wished to suggest that centralized decision-making (by government?) would have greatly reduced these transactions costs, but David made no explicit claim that such a remedy would be feasible. If David had wished to make market failure or remediable inefficiency the central focus of his claims for path dependence, then he surely could and would have done so in a more explicit and forceful manner.

Part of what remains of the case of QWERTY is modest support for David’s central claim that history has mattered, leaving us with a standard keyboard that is less efficient than alternatives available today — not as inefficient as the claims David cited, but still somewhat so. Donald Norman, one of the world’s leading authorities on ergonomics, estimates on the basis of several recent studies that QWERTY is about 10 percent less efficient than the Dvorak keyboard and other alternatives (Norman, 1990, and recent personal correspondence).

For Liebowitz and Margolis, it was most important to show that the costs of switching to an alternative keyboard would outweigh any benefits, so that there is no market failure in remaining with the QWERTY standard. This claim appears to stand. David had made no explicit claim for market failure, but Liebowitz and Margolis — as well, indeed, as some supporters of David’s account — took that as the main issue at stake in David’s argument.

Britain’s “Silly Little Bobtailed” Coal Wagons

A strong example of inefficiency in path dependence is offered by the small coal wagons that persisted in British railway traffic until the mid-twentieth century. Already in 1915, economist Thorstein Veblen cited these “silly little bobtailed carriages” as an example of how industrial modernization may be inhibited by “the restraining dead hand of … past achievement,” that is, the historical legacy of interrelated physical infrastructure: “the terminal facilities, tracks, shunting facilities, and all the ways and means of handling freight on this oldest and most complete of railway systems” (Veblen, 1915, pp. 125-8). Veblen’s analysis was the starting point for the literature on technical and institutional interrelatedness that formed the background to David’s early views on path dependence.

In recent years Van Vleck (1997, 1999) has defended the efficiency of Britain’s small coal wagons, arguing that they offered “a crude just-in-time approach to inventory” for coal users while economizing on the substantial costs of road haulage that would have been necessary for small deliveries if railway coal wagons were larger. More recently, however, Scott (1999, 2001) presented evidence that few coal users benefited from small deliveries. Rather, he showed, the wagons’ small size, widely dispersed ownership and control, antiquated braking and lubrication systems, and generally poor physical condition made them quite inefficient indeed. Replacing these cars and associated infrastructure with modern, larger wagons owned and controlled by the railways would have offered savings in railway operating costs of about 56 percent and a social rate of return of about 24 percent. Nevertheless, the small wagons were not replaced until both railways and collieries were nationalized after World War II. The reason, according to Scott, lay partly in the regulatory system that allocated certain rights to collieries and other car owners at the expense of the railways, and partly in the massive coordination problem that arose because railways would not have realized much savings in costs until a large proportion of antiquated cars were replaced. Together, these factors lowered the railways’ realizable private rate of return below profitable levels. (Van Vleck’s smaller estimates for potential efficiency gains from scrapping the small wagons were largely the result of assuming that there would be no change in the regulatory system or in the ownership and control of wagons. Scott argued that such changes added greatly to the potential cost savings.)

Scott noted that the persistence of small wagons was path dependent, because both the technology embodied in the small wagons and the institutions that supported fragmented ownership long outlasted the earlier, transitory conditions to which they were a rational response. Ownership of wagons by the collieries had been advantageous to railways as well as collieries in the mid-nineteenth century, and government regulation had assigned rights in a way designed to protect the interests of wagon owners from opportunistic behavior by the railways. By the early twentieth century, these regulatory institutions imposed a heavy burden on the railways, because they required either conveyance even of antiquated wagons for set rates or else payment of high levels of compensation to the wagon owners. The requirement for compensation helped to raise the railways’ private costs of scrapping the small wagons above the social costs of doing so.

The case shows the relevance of Paul David’s approach to path dependence, with its discussion of technical (and institutional) interrelatedness and quasi-irreversible investment, above and beyond Brian Arthur’s more narrow focus on increasing returns.

The case also supports Liebowitz and Margolis’s insight that an inferior path-dependent outcome can only persist where transactions costs (and other costs) prevent remediation, but it undercuts those authors’ skepticism toward the possibility of market failure. The high transactions costs that would have been entailed in scrapping Britain’s small wagons indeed outweighed the potential gains, but these costs were high only due to the institutions of property rights that supported fragmented ownership. When these institutions were later changed, a remedy to Britain’s coal-wagon problem followed quickly. Thus, the failure to scrap the small wagons earlier can be ascribed to institutional and market failure.

The case thus appears to satisfy Liebowitz and Margolis’s criterion for “third-degree” path dependence. This is not completely clear, however. Whether Britain’s coal-wagon problem qualifies for that status depends on whether the benefits of solving the problem would have been worth the cost of implementing the necessary institutional changes, a question that Scott did not address. Liebowitz and Margolis argue that an inferior outcome cannot be considered a result of market failure, or even meaningfully inefficient, unless this criterion of remediability is satisfied.

In the present author’s view, Liebowitz and Margolis’s criterion has some usefulness in the context of considering government policy toward inferior outcomes, which is Liebowitz and Margolis’s chief concern, but the criterion is much less useful for a more general analysis of these outcomes. If Britain’s coal-wagon problem does not qualify for “third-degree” status, then this suggests that Liebowitz and Margolis’s dismissive approach toward cases that they relegate to “second-degree” status is misplaced. The case seems to show that path dependence can have substantial effects on the economy, that the outcomes of path-dependent processes can vary substantially from the predictions of standard economic models, that these outcomes can exhibit substantial inefficiency of a sort discussed by proponents of path dependence, and that all this can happen despite the exercise of foresight and forward-looking behavior.

Railway Track Gauges

The case of railway track gauge illustrates how “accidental” or “contingent” events and transitory circumstances can affect choice of technique and economic efficiency over a period now approaching two centuries (Puffert 2000, 2002). The gauge now used on over half the world’s railways, 4 feet 8.5 inches (4’8.5″, 1435 mm), comes from the primitive mining tramway where George Stephenson gained his early experience. Stephenson transferred this gauge to the Liverpool and Manchester Railway, opened in 1830, which served as the model of best practice for many of the earliest modern railways in Britain, continental Europe, and North America. Many railway engineers today view this gauge as narrower than optimal. Yet, although they would choose a broader gauge today if the choice were open, they do not view potential gains in operating efficiency as worth the costs of conversion.

A much greater source of inefficiency has been the emergence of diversity in gauge. Six gauges came into widespread use in North America by the 1870s, and Britain’s extensive Great Western Railway system maintained a variant gauge for over half a century until 1892. Even today, Australia and Argentina each have three different regional-standard gauges, while India, Chile, and several other countries each make extensive use of two gauges. Breaks of gauge also persist at the border of France and Spain and most external borders of the former Russian and Soviet empires. This diversity adds costs and impairs service in interregional and international traffic. Where diversity has been resolved, conversion costs have sometimes been substantial.

This diversity arose as a result of several contributing factors: limited foresight, the search for an improved railway technology, transitory circumstances, and contingent events or “historical accidents.” Many early railway builders sought simply to serve local or regional transportation needs, and they did not foresee the later importance of railways in interregional traffic. Beginning in the late 1830s, locomotive builders found their ability to construct more powerful, easily maintained engines constrained by the Stephenson gauge, while some civil engineers thought that a broader gauge would offer improved capacity, speed, and passenger comfort. This led to a wave of adoption of broad gauges for new regions in Europe, the Americas, South Asia, and Australia. Changes in locomotive design soon eliminated much of the advantage of broad gauges, and by the 1860s it became possible to take advantage of the ability of narrow gauges to make sharper curves, following the contours of rugged landscape and reducing the need for costly bridges, embankments, cuttings, and tunnels. This, together with the beliefs of some engineers and promoters that narrow gauges would offer savings in operating costs, led to a wave of introductions of narrow gauges to new regions.

At every point in time there was some variation in engineering opinion and practice, so the gauge introduced to each new region often depended on the contingent circumstance of who decided it. To cite only the most fateful example, Stephenson’s rivals for the contract to build the Liverpool and Manchester Railway proposed to adopt the gauge of 5’6″ (1676 mm). If that team had been employed, or if Stephenson had gained his earlier experience on almost any other mining tramway, then the ensuing worldwide history of railway gauge would have been different — perhaps far different.

After the introduction of particular gauges to new regions, later railways nearly always adopted the gauge of established connecting lines, reinforcing early contingent choices with positive feedbacks. As different local common-gauge regions expanded, regions that happened to have the same gauge merged into one another, but breaks of gauge emerged between regions of differing gauge. The extent of diversity that emerged at the national and continental levels, and thus the relative efficiency of the outcome, thus depended on earlier contingent events.

Once these patterns of diversity had been established by a path-dependent process, they were partly rationalized by the sort of forward-looking, profit-seeking behavior proposed by Liebowitz and Margolis. In North America, for example, a continental standard emerged quickly after demand for interregional transport grew, and standardization was facilitated both by the formation of interregional railway systems and by cooperation among independent railways. Elsewhere as well, much of the most inefficient diversity was resolved relatively quickly. Nonetheless, a costly diversity has persisted in places where variant-gauge regions had grown large and costly to convert before the value of conversion became apparent. Spain’s variant gauge has become more costly in recent years as the country’s economy has been integrated into that of the European Union, but estimated costs of (U.S.) $5 billion have precluded conversion. India and Australia have only recently made substantial progress toward the resolution of their century-old diversity.

Wherever gauge diversity has been resolved, it is one of the earliest gauges that has emerged as the standard. In no significant part of the world has current practice in gauge broken free of its early history. The inefficiency that has resulted, relative to what other sequences of events might have produced, was not the result of market failure. Rather, it resulted primarily from the natural inability of railway builders to foresee how railway networks and traffic patterns would develop and how technology would evolve.

The case also illustrates the usefulness of Arthur’s (1989) modeling approach for cases of unsponsored techniques and limited foresight (Puffert 2000, 2002). These were essentially the conditions Arthur assumed in proposing his model.

Videocassette Recording Systems

Markets for technical systems exhibiting network externalities (where users benefit from using the same system as other users) often tend to give rise to de facto standards — one system used by all. Foreseeing this, suppliers sometimes join to offer a common system standard from the outset, precluding any possibility for path-dependent competition. Examples include first-generation compact discs (CDs and CD-ROMs) and second-generation DVDs.

In the case of consumer videocassette recorders (VCRs), however, Sony with its Betamax system and JVC with its VHS system were unable to agree on a common set of technical specifications. This gave rise to a celebrated battle between the systems lasting from the mid-1970s to the mid-1980s. Arthur (1990) used this competition as the basis for a thought experiment to illustrate path dependence. He explained the triumph of VHS as the result of positive feedbacks in the video film rental market, as video rental stores stocked more film titles for the system with the larger user base, while new adopters chose the system for which they could rent more videos. He also suggested tentatively that, if the common perception that Betamax offered a superior picture quality is true, then “the market’s choice” was not the best possible outcome.

In a closer look at the case, Cusumano et al. (1992) showed that Arthur’s suggested positive-feedback mechanism was real, and that this mechanism explains why Sony eventually withdrew Betamax from the market rather than continuing to offer it as an alternative system. However, they also showed that the video rental market emerged only at a late stage in the competition, after VHS already had a strong lead in market share. Thus, Arthur’s mechanism does not explain how the initial symmetry in competitors’ positions was broken.

Cusumano et al. argued, nonetheless, that the earlier competition already had a path-dependent market-share dynamic. They presented evidence that suppliers and distributors of VCRs increasingly chose to support VHS rather than Betamax because they saw other market participants doing so, leading them to believe that VHS would win the competition and emerge as a de facto standard. The authors did not make clear, however, why market participants believed that a single system would become so dominant. (In a private communication, coauthor Richard Rosenbloom said that this was largely because they foresaw the later emergence of a market for prerecorded videos.)

The authors argued that three early differences in promoters’ strategies gave VHS its initial lead. First, Sony proceeded without major co-sponsors for its Betamax system, while JVC shared VHS with several major competitors. Second, the VHS consortium quickly installed a large manufacturing capacity. Third, Sony opted for a more compact videocassette, while JVC chose instead a longer playing time for VHS. In the event, a longer playing time proved more important to many consumers and distributors, at least during the early years of the competition when Sony cassettes could not accommodate a full (U.S.) football game.

This interpretation shows how purposeful, forward-looking behavior interacted with positive feedbacks in producing the final outcome. The different strategies, made under conditions of limited foresight, were contingent decisions that set competition among the firms on one path rather than another (Puffert 2003). Furthermore, the early inability of Sony cassettes to accommodate a football game was a transitory circumstance that may have affected outcomes long afterward.

Liebowitz and Margolis’s (1995) initial interpretation of the case responded only to Arthur’s brief discussion. They argued that the playing-time advantage for VHS was the crucial factor in the competition, so that VHS won because its features most closely matched consumer demand — and not due to path dependence. Although their discussion covers part of the same ground as that of Cusumano et al., Liebowitz and Margolis did not respond to the earlier article’s argument that the purposeful behavior of suppliers interacted with positive feedbacks. Rather, they treated this purposeful behavior as the antithesis of the mechanistic, non-purposeful evolution of market share that they see as the ultimate basis of path dependence.

Liebowitz and Margolis also presented substantial evidence that Betamax was not, in fact, a superior system for the consumer market. The primary concern of their argument was to refute a suggested case of path-dependent lock-in to an inferior technique, and in this they succeeded. It is arguable that they overstated their case, however, in asserting that what they refuted amounted to a claim for “third-degree” path dependence. Arthur had not argued that the selection of VHS, if inferior to Betamax, would have been remediable.

Recently, Liebowitz (2002) did respond to Cusumano et al. He argued, in part, that the larger VHS tape size offered a permanent rather than transitory advantage, as this size facilitated higher tape speeds and thus better picture quality for any given total playing time.

A Brief Discussion of Further Cases

Pest Control

Cowan and Gunby (1996) showed that there is path dependence in farmers’ choices between systems of chemical pest control and integrated pest management (IPM). IPM relies in part on predatory insects to devour harmful ones, and the drift of chemical pesticides from neighboring fields often makes the use of IPM impossible. Predatory insects also drift among fields, further raising farmers’ incentives to use the same techniques as neighbors. To be practical, IPM must be used on the whole set of farms that are in proximity to each other. Where this set is large, the transactions costs of persuading all farmers to forego chemical methods often prevent adoption. In addition to these localized positive feedbacks, local learning effects also make the choice between systems path dependent. The path-dependent local lock-in of each technique has sometimes been upset by such developments as invasions by new pests and the emergence of resistance to pesticides.

Nuclear Power Reactors

Cowan (1990) argued that transitory circumstances led to the establishment of the dominant “light-water” design for civilian nuclear power reactors. This design, adapted from power plants for nuclear submarines, was rushed into use during the Cold War because the political value of demonstrating peaceful uses for nuclear technology overrode the value of finding the most efficient technique. Thereafter, according to Cowan, learning effects arising from engineering experience for the light-water design continued to make it the rational choice for new reactors. He argued, however, that there are fundamental scientific and engineering reasons to believe that an equivalent degree of development of alternative designs might have made them superior.

Information Technology

Although Shapiro and Varian (1998) did not emphasize the term path dependence, they pointed to a broad range of research documenting positive feedbacks that affect competition in contemporary information technology. Like Morris and Ferguson (1993), they showed how competing firms recognize and seek to take advantage of these positive feedbacks. Strictly speaking, not all of these cases are path dependent, because in some cases firms have been able to control the direction and outcome of the allocation processes. In other cases, however, the allocation process has had its own path-dependent dynamic, affected both by the attempts of rival firms to promote their products and by factors that are unforeseen or out of their control.

Among the cases that Shapiro and Varian discuss are some involving Microsoft. In addition, some proponents of the importance of path dependence have argued that positive feedbacks favor Microsoft’s competitive position in ways that hinder competitors from developing and introducing innovative products (see, for example, Reback et al., 1995). Liebowitz and Margolis (2000), by contrast, offered evidence of cases where superior computer software products have had no trouble winning markets. Liebowitz and Margolis also argued that the lack of demonstrated empirical examples of “third-degree” path dependence creates a strong presumption against the existence of an inferior outcome that government antitrust measures could remedy.

Path Dependence at Larger Levels

Geography and Trade

The examples thus far all treat path dependence in the selection of alternative products or techniques. Krugman (1991, 1994) and Arthur (1994) have also pointed to a role for contingent events and positive feedbacks in economic geography, including in the establishment of Silicon Valley and other concentrations of economic activity. Some of these locations, they showed, are the result not of systematic advantages but rather of accidental origins reinforced by “agglomeration” economies that lead new firms to locate in the vicinity of similar established firms. Krugman (1994) also discussed how these same effects produce path dependence in patterns of international trade. Geographic patterns of economic activity, some of which arise as a result of contingent historical events, determine the patterns of comparative advantage that in turn determine patterns of trade.

Institutional Development

Path dependence also arises in the development of institutions — a term that economists use to refer to the “rules of the game” for an economy. Eichengreen (1996) showed, for example, that the emergence of international monetary systems, such as the classical gold standard of the late nineteenth century, was path dependent. This path dependence has been based on the benefits to different countries of adopting a common monetary system. Eichengreen noted that these benefits take the form of network externalities. Puffert (2003) has argued that path dependence in institutions is likely to be similar to path dependence in technology, as both are based on the value of adopting a common practice — some technique or rule — that becomes costly to change.

Thus path dependence can affect not only individual features of the economy but also larger patterns of economic activity and development. Indeed, some teachers of economic history interpret major regional and national patterns of industrialization and growth as partly the result of contingent events reinforced by positive feedbacks — that is, as path dependent. Some suggest, as well, that the institutions responsible for economic development in some parts of the world and those responsible for backwardness in others are, at least in part, path dependent. In the coming years we may expect these ideas to be included in a growing literature on path dependence.

Conclusion

Path dependence arises, ultimately, because there are increasing returns to the adoption of some technique or other practice and because there are costs in changing from an established practice to a different one. As a result, many current features of the economy are based on what appeared optimal or profit-maximizing at some point in the past, rather than on what might be preferred on the basis of current general conditions.
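For readers who wish to see this mechanism in miniature, the following sketch simulates a simple two-technology adoption process in the spirit of Arthur's (1989) model of competing technologies. The payoffs, the strength of increasing returns, and the number of agents are illustrative assumptions rather than figures from the literature; the point is only that structurally identical markets, replayed with a different chance ordering of early adopters, can lock in to different outcomes.

import random

def simulate(n_agents=2000, r=0.01, seed=None):
    """Sequential adoption of technologies A and B with increasing returns.

    Two agent types arrive in random order. Each type has a different
    stand-alone preference, but the payoff of each technology also rises
    with its number of previous adopters (the increasing-returns term r).
    Returns the final share of adopters using technology A.
    (Illustrative sketch; all parameters are assumptions, not estimates.)
    """
    rng = random.Random(seed)
    n_a, n_b = 0, 0
    for _ in range(n_agents):
        if rng.random() < 0.5:       # "R-type" agent: prefers A stand-alone
            base_a, base_b = 1.0, 0.8
        else:                        # "S-type" agent: prefers B stand-alone
            base_a, base_b = 0.8, 1.0
        payoff_a = base_a + r * n_a  # increasing returns to adoption of A
        payoff_b = base_b + r * n_b  # increasing returns to adoption of B
        if payoff_a >= payoff_b:
            n_a += 1
        else:
            n_b += 1
    return n_a / n_agents

for seed in range(5):
    print(seed, round(simulate(seed=seed), 3))

Once either technology builds a lead of about twenty adopters, even agents who prefer the other technology stand-alone find the leader more attractive, so nearly all later adopters choose it; which technology wins depends on the chance sequence of early arrivals.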

The theory of path dependence is not an alternative to neoclassical economics but rather a supplement to it. The theory of path dependence assumes, generally, that people optimize on the basis of their own interests and the information at their disposal, but it highlights ways that earlier choices put constraints on later ones, channeling the sequence of economic outcomes along one possible path rather than another. This theory offers reason to believe that some — or perhaps many — economic processes have multiple possible paths of outcomes, rather than a unique equilibrium (or unique path of equilibria). Thus the selection among outcomes may depend on nonsystematic or “contingent” choices or events. Empirical case studies offer examples of how such choices or events have led to the establishment, and “lock in,” of particular techniques, institutions, and other features of the economy that we observe today — although other outcomes would have been possible. Thus, the analysis of path dependence adds to what economists know on the basis of more established forms of neoclassical analysis.

It is not possible at this time to assess the overall importance of path dependence, either in determining individual features of the economy or in determining larger patterns of economic activity. Research has only partly sorted out the concrete conditions of technology, interactions among agents, foresight, and markets and other institutions that make allocation path dependent in some cases but not in others (Puffert 2003; see also David 1997, 1999, 2000 for recent refinements on theoretical conditions for path dependence).

Addendum: Technical Notes on Definitions

Path dependence, as economists use the term, corresponds closely to what mathematicians call non-ergodicity (David 2000). A non-ergodic stochastic process is one that, as it develops, undergoes a change in the limiting distribution of future states, that is, in the probabilities of different outcomes in the distant future. This is somewhat different from what mathematicians call path dependence. In mathematics, a stochastic process is called path dependent, as opposed to state dependent, if the probabilities of transition to alternative states depend not simply on the current state of the system but, additionally, on previous states.

Furthermore, the term path dependence is applied to economic processes in which small variations in early events can lead to large or discrete variations in later outcomes, but generally not to processes in which small variations in events lead only to small and continuous variations in outcomes. That is, the term is used for cases where positive feedbacks magnify the impact of early events, not for cases where negative feedbacks diminish this impact over time.
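A standard textbook illustration of these distinctions is the Polya urn, a stylized positive-feedback process of the kind often invoked in this literature. The sketch below, whose parameters are purely illustrative, contrasts the urn with a fixed-probability benchmark in which the influence of early draws on the long-run share washes out.

import random

def polya_urn_share(n_draws=10000, seed=None):
    """Polya urn: start with one red and one blue ball; each drawn ball is
    returned together with one more ball of the same color. Early draws
    permanently shift the limiting share of red balls, so the process is
    non-ergodic. (Illustrative sketch with assumed parameters.)"""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(n_draws):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

def iid_share(n_draws=10000, p=0.5, seed=None):
    """Benchmark: independent draws with a fixed probability p. The share of
    red draws converges to p whatever happens early on (an ergodic case)."""
    rng = random.Random(seed)
    red = sum(rng.random() < p for _ in range(n_draws))
    return red / n_draws

for seed in range(5):
    print(seed, round(polya_urn_share(seed=seed), 3), round(iid_share(seed=seed), 3))

Across seeds, the urn's long-run share of red balls varies widely (starting from one ball of each color, its limiting share is in fact uniformly distributed), while the benchmark share settles near 0.5 in every run. Only the urn, in which each draw raises the probability of drawing that color again, displays the magnification of early events that the term path dependence is meant to capture.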

The term path dependence can also be used for cases in which the impact of early events persists without appreciably increasing or decreasing over time. The most important examples would be instances where transitory conditions have large, persistent impacts.

References

Arthur, W. Brian. 1989. “Competing Technologies, Increasing Returns, and Lock-in by Historical Events.” Economic Journal 99: 116-31.

Arthur, W. Brian. 1990. “Positive Feedbacks in the Economy.” Scientific American 262 (February): 92-99.

Arthur, W. Brian. 1994. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press.

Cowan, Robin. 1990. “Nuclear Power Reactors: A Study in Technological Lock-in.” Journal of Economic History 50: 541-67.

Cowan, Robin, and Philip Gunby. 1996. “Sprayed to Death: Path Dependence, Lock-in and Pest Control Strategies.” Economic Journal 106: 521-42.

Cusumano, Michael A., Yiorgos Mylonadis, and Richard S. Rosenbloom. 1992. “Strategic Maneuvering and Mass-Market Dynamics: The Triumph of VHS over Beta.” Business History Review 66: 51-94.

David, Paul A. 1975. Technical Choice, Innovation and Economic Growth: Essays on American and British Experience in the Nineteenth Century. Cambridge: Cambridge University Press.

David, Paul A. 1985. “Clio and the Economics of QWERTY.” American Economic Review (Papers and Proceedings) 75: 332-37.

David, Paul A. 1986. “Understanding the Economics of QWERTY: The Necessity of History.” In W.N. Parker, ed., Economic History and the Modern Economist. Oxford: Oxford University Press.

David, Paul A. 1987. “Some New Standards for the Economics of Standardization in the Information Age.” In P. Dasgupta and P. Stoneman, eds., Economic Policy and Technological Performance. Cambridge, England: Cambridge University Press.

David, Paul A. 1997. “Path Dependence and the Quest for Historical Economics: One More Chorus of the Ballad of QWERTY.” University of Oxford Discussion Papers in Economic and Social History, Number 20. http://www.nuff.ox.ac.uk/economics/history/paper20/david3.pdf

David, Paul A. 1999. “At Last, a Remedy for Chronic QWERTY-Skepticism!” Working paper, All Souls College, Oxford University. http://www.eh.net/Clio/Publications/remedy.shtml

David, Paul A. 2000. “Path Dependence, Its Critics and the Quest for ‘Historical Economics’.” Working paper, All Souls College, Oxford University. http://www-econ.stanford.edu/faculty/workp/swp00011.html

Eichengreen, Barry. 1996. Globalizing Capital: A History of the International Monetary System. Princeton: Princeton University Press.

Frankel, M. 1955. “Obsolescence and Technological Change in a Maturing Economy.” American Economic Review 45: 296-319.

Katz, Michael L., and Carl Shapiro. 1985. “Network Externalities, Competition, and Compatibility.” American Economic Review 75: 424-40.

Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8: 93-115.

Kindleberger, Charles P. 1964. Economic Growth in France and Britain, 1851-1950. Cambridge, MA: Harvard University Press.

Krugman, Paul. 1991. “Increasing Returns and Economic Geography.” Journal of Political Economy 99: 483-99.

Krugman, Paul. 1994. Peddling Prosperity. New York: W.W. Norton.

Liebowitz, S.J. 2002. Rethinking the Network Economy. New York: AMACOM.

Liebowitz, S.J., and Stephen E. Margolis. 1990. “The Fable of the Keys.” Journal of Law and Economics 33: 1-25.

Liebowitz, S.J., and Stephen E. Margolis. 1995. “Path Dependence, Lock-In, and History.” Journal of Law, Economics, and Organization 11: 204-26. http://wwwpub.utdallas.edu/~liebowit/paths.html

Liebowitz, S.J., and Stephen E. Margolis. 2000. Winners, Losers, and Microsoft. Oakland: The Independent Institute.

Morris, Charles R., and Charles H. Ferguson. 1993. “How Architecture Wins Technology Wars.” Harvard Business Review (March-April): 86-96.

Norman, Donald A. 1990. The Design of Everyday Things. New York: Doubleday. (Originally published in 1988 as The Psychology of Everyday Things.)

Puffert, Douglas J. 2000. “The Standardization of Track Gauge on North American Railways, 1830-1890.” Journal of Economic History 60: 933-60.

Puffert, Douglas J. 2002. “Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge.” Explorations in Economic History 39: 282-314.

Puffert, Douglas J. 2003 forthcoming. “Path Dependence, Network Form, and Technological Change.” In W. Sundstrom, T. Guinnane, and W. Whatley, eds., History Matters: Essays on Economic Growth, Technology, and Demographic Change. Stanford: Stanford University Press. http://www.vwl.uni-muenchen.de/ls_komlos/nettech1.pdf

Reback, Gary, Susan Creighton, David Killam, and Neil Nathanson. 1995. “Technological, Economic and Legal Perspectives Regarding Microsoft’s Business Strategy in Light of the Proposed Acquisition of Intuit, Inc.” (“Microsoft White Paper”). White paper, law firm of Wilson, Sonsini, Goodrich & Rosati. http://www.antitrust.org/cases/microsoft/whitep.html

Scott, Peter. 1999. “The Efficiency of Britain’s ‘Silly Little Bobtailed’ Coal Wagons: A Comment on Van Vleck.” Journal of Economic History 59: 1072-80.

Scott, Peter. 2001. “Path Dependence and Britain’s ‘Coal Wagon Problem’.” Explorations in Economic History 38: 366-85.

Shapiro, Carl, and Hal R. Varian. 1998. Information Rules. Cambridge, MA: Harvard Business School Press.

Van Vleck, Va Nee L. 1997. “Delivering Coal by Road and Rail in Britain: The Efficiency of the ‘Silly Little Bobtailed’ Coal Wagons.” Journal of Economic History 57: 139-160.

Van Vleck, Va Nee L. 1999. “In Defense (Again) of ‘Silly Little Bobtailed’ Coal Wagons: Reply to Peter Scott.” Journal of Economic History 59: 1081-84.

Veblen, Thorstein. 1915. Imperial Germany and the Industrial Revolution. London: Macmillan.

Citation: Puffert, Douglas. “Path Dependence”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/path-dependence/

An Economic History of Patent Institutions

B. Zorina Khan, Bowdoin College

Introduction

Such scholars as Max Weber and Douglass North have suggested that intellectual property systems had an important impact on the course of economic development. However, questions from earlier eras remain current today, ranging from whether patents and copyrights constitute optimal policies toward intellectual inventions, and what their philosophical rationale is, to the growing concerns of international political economy. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time than those of the twenty-first century. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare and whether piracy might be to the advantage of developing countries. The nineteenth and early twentieth centuries in particular witnessed considerable variation in the intellectual property policies that individual countries implemented, and this allows economic historians to determine the consequences of different rules and standards.

This article outlines crucial developments in the patent policies of Europe, the United States, and follower countries. The final section discusses the harmonization of international patent laws that occurred after the middle of the nineteenth century.

Europe

The British Patent System

The grant of exclusive property rights vested in patents developed from medieval guild practices in Europe. Britain in particular is noted for the establishment of a patent system which has been in continuous operation for a longer period than any other in the world. English monarchs frequently used patents to reward favorites with privileges, such as monopolies over trade that increased the retail prices of commodities. It was not until the seventeenth century that patents were associated entirely with awards to inventors, when Section 6 of the Statute of Monopolies (21 Jac. I. C. 3, 1623, implemented in 1624) repealed the practice of royal monopoly grants to all except patentees of inventions. The Statute of Monopolies allowed patent rights of fourteen years for “the sole making or working of any manner of new manufacture within this realm to the first and true inventor…” Importers of foreign discoveries were allowed to obtain domestic patent protection in their own right.

The British patent system established significant barriers in the form of prohibitively high costs that limited access to property rights in invention to a privileged few. Patent fees for England alone amounted to £100-£120 ($585), or approximately four times per capita income in 1860. The fee for a patent that also covered Scotland and Ireland could cost as much as £350 ($1,680). Adding a co-inventor was likely to increase the costs by another £24. Patents could be extended only by a private Act of Parliament, which required political influence, and extensions could cost as much as £700. These constraints favored the elite class of those with wealth, political connections or exceptional technical qualifications, and consciously created disincentives for inventors from humble backgrounds. Patent fees provided an important source of revenues for the Crown and its employees, and created a class of administrators who had strong incentives to block proposed reforms.

In addition to the monetary costs, complicated administrative procedures that inventors had to follow implied that transactions costs were also high. Patent applications for England alone had to pass through seven offices, from the Home Secretary to the Lord Chancellor, and twice required the signature of the Sovereign. If the patent were extended to Scotland and Ireland it was necessary to negotiate another five offices in each country. The cumbersome process of patent applications (variously described as “mediaeval” and “fantastical”) afforded ample material for satire, but obviously imposed severe constraints on the ordinary inventor who wished to obtain protection for his discovery. These features testify to the much higher monetary and transactions costs, in both absolute and relative terms, of obtaining property rights to inventions in England in comparison to the United States. Such costs essentially restricted the use of the patent system to inventions of high value and to applicants who already possessed or could raise sufficient capital to apply for the patent. The complicated system also inhibited the diffusion of information and made it difficult, if not impossible, for inventors outside of London to readily conduct patent searches. Patent specifications were open to public inspection on payment of a fee, but until 1852 they were not officially printed, published or indexed. Since the patent could be filed in any of three offices in Chancery, searches of the prior art involved much time and inconvenience. Potential patentees were well advised to obtain the help of a patent agent to aid in negotiating the numerous steps and offices that were required for pursuit of the application in London.

In the second half of the eighteenth century, nation-wide lobbies of manufacturers and patentees expressed dissatisfaction with the operation of the British patent system. However, it was not until after the Crystal Palace Exhibition in 1851 that their concerns were finally addressed, in an effort to meet the burgeoning competition from the United States. In 1852 the efforts of numerous societies and of individual engineers, inventors and manufacturers over many decades were finally rewarded. Parliament approved the Patent Law Amendment Act, which authorized the first major adjustment of the system in two centuries. The new patent statutes incorporated features that drew on testimonials to the superior functioning of the American patent regime. Significant changes in the direction of the American system included lower fees and costs, and the application procedures were rationalized into a single Office of the Commissioners of Patents for Inventions, or “Great Seal Patent Office.”

The 1852 patent reform bills included calls for a U.S.-style examination system but this was amended in the House of Commons and the measure was not included in the final version. Opponents were reluctant to vest examiners with the necessary discretionary power, and pragmatic observers pointed to the shortage of a cadre of officials with the required expertise. The law established a renewal system that required the payment of fees in installments if the patentee wished to maintain the patent for the full term. Patentees initially paid £25 and later installments of £50 (after three years) and £100 (after seven years) to maintain the patent for a full term of fourteen years. Despite the relatively low number of patents granted in England, between 1852 and 1880 the patent office still made a profit of over £2 million. Provision was made for the printing and publication of the patent records. The 1852 reforms undoubtedly instituted improvements over the former opaque procedures, and the lower fees had an immediate impact. Nevertheless, the system retained many of the former features that had implied that patents were in effect viewed as privileges rather than merited rights, and only temporarily abated expressions of dissatisfaction.

One source of dissatisfaction that endured until the end of the nineteenth century was the state of the common law regarding patents. First, at least partially in reaction to a history of abuse of patent privileges, patents were widely viewed as monopolies that restricted community rights, and thus were to be carefully monitored and narrowly construed. Second, British patents were granted “by the grace of the Crown” and therefore were subject to any restrictions that the government cared to impose. According to the statutes, as a matter of national expediency, patents were to be granted if “they be not contrary to the law, nor mischievous to the State, by raising prices of commodities at home, or to the hurt of trade, or generally inconvenient.” The Crown possessed the ability to revoke any patents that were deemed inconvenient or contrary to public policy. After 1855, the government could also appeal to a need for official secrecy to prohibit the publication of patent specifications in order to protect national security and welfare. Moreover, the state could commandeer a patentee’s invention without compensation or consent, although in some cases the patentee was paid a royalty.

Policies towards patent assignments and trade in intellectual property rights also constrained the market for inventions. Ever vigilant to protect an unsuspecting public from fraudulent financial schemes on the scale of the South Sea Bubble, the law limited ownership of patent rights to five investors (later extended to twelve). Nevertheless, the law did not offer any relief to the purchaser of an invalid or worthless patent, so potential purchasers were well advised to engage in extensive searches before entering into contracts. When coupled with the lack of assurance inherent in a registration system, the purchase of a patent right involved a substantial amount of risk and high transactions costs, all indicative of a speculative instrument. It is therefore not surprising that the market for assignments and licenses seems to have been quite limited, and even in the year after the 1852 reforms only 273 assignments were recorded.

In 1883 new legislation introduced procedures that were somewhat simpler, with fewer steps. The fees fell to £4 for the initial term of four years, and the remaining £150 could be paid in annual increments. For the first time, applications could be forwarded to the Patent Office through the post office. This statute introduced opposition proceedings, which enabled interested parties to contest the proposed patent within two months of the filing of the patent specifications. Compulsory licenses were introduced in 1883 (and strengthened in 1919 as “licenses of right”) for fear that foreign inventors might injure British industry by refusing to grant other manufacturers the right to use their patents. The 1883 act provided for the employment of “examiners” but their activity was limited to ensuring that the material was patentable and properly described. Indeed, it was not until 1902 that the British system included an examination for novelty, and even then the process was not regarded as being as stringent as in other countries. Many new provisions were designed to thwart foreign competition. Until 1907 patentees who manufactured abroad were required to also make the patented product in Britain. Between 1919 and 1949 chemical products were excluded from patent protection to counter the threat posed by the superior German chemical industry. Licenses of right enabled British manufacturers to compel foreign patentees to permit the use of their patents on pharmaceuticals and food products.

In sum, changes in the British patent system were initially unforthcoming despite numerous calls for change. Ultimately, the realization that England’s early industrial and technological supremacy was threatened by the United States and other nations in Europe led to a slow process of revisions that lasted well into the twentieth century. One commentator summed up the series of developments by declaring that the British patent system at the time of writing (1967) remained essentially “a modified version of a pre-industrial economic institution.”

The French Patent System

Early French policies towards inventions and innovations in the eighteenth century were based on an extensive but somewhat arbitrary array of rewards and incentives. During this period inventors or introducers of inventions could benefit from titles, pensions that sometimes extended to spouses and offspring, loans (some interest-free), lump-sum grants, bounties or subsidies for production, exemptions from taxes, or monopoly grants in the form of exclusive privileges. This complex network of state policies towards inventors and their inventions was revised but not revoked after the outbreak of the French Revolution.

The modern French patent system was established according to the laws of 1791 (amended in 1800) and 1844. Patentees filed through a simple registration system without any need to specify what was new about their claim, and could persist in obtaining the grant even if warned that the patent was likely to be legally invalid. On each patent document the following caveat was printed: “The government, in granting a patent without prior examination, does not in any manner guarantee either the priority, merit or success of an invention.” The inventor decided whether to obtain a patent for a period of five, ten or fifteen years, and the term could only be extended through legislative action. Protection extended to all methods and manufactured articles, but excluded theoretical or scientific discoveries without practical application, financial methods, medicines, and items that could be covered by copyright.

The 1791 statute stipulated patent fees that were costly, ranging from 300 to 1,500 livres, depending on the declared term of the patent. The 1844 statute maintained this policy: fees were set at 500 francs ($100) for a five-year patent, 1,000 francs for a ten-year patent and 1,500 francs for a fifteen-year patent, payable in annual installments. In an obvious attempt to limit international diffusion of French discoveries, until 1844 patents were voided if the inventor attempted to obtain a patent overseas on the same invention. On the other hand, the first introducer of an invention covered by a foreign patent would enjoy the same “natural rights” as the patentee of an original invention or improvement. Patentees had to put the invention into practice within two years from the initial grant, or face a tribunal which had the power to repeal the patent, unless the patentee could point to unforeseen events which had prevented his complying with the provisions of the law. The rights of patentees were also restricted if the invention related to items that were controlled by the French government, such as printing presses and firearms.

In return for the limited monopoly right, the patentee was expected to describe the invention in such terms that a workman skilled in the arts could replicate it, and this information was expected to be made public. However, no provision was made for the publication or diffusion of these descriptions. At least until the law of April 7, 1902, specifications were only available in manuscript form in the office in which they had originally been lodged, and printed information was limited to brief titles in patent indexes. The attempt to obtain information on the prior art was also inhibited by restrictions placed on access: viewers had to state their motives; foreigners had to be assisted by French attorneys; and no extract from the manuscript could be copied until the patent had expired.

The state remained involved in the discretionary promotion of invention and innovation through policies beyond the granting of patents. In the first place, the patent statutes did not limit their offer of potential appropriation of returns only to property rights vested in patents. The inventor of a discovery of proven utility could choose between taking out a patent or making a gift of the invention to the nation in exchange for an award from funds that were set aside for the encouragement of industry. Second, institutions such as the Société d’encouragement pour l’industrie nationale awarded a number of medals each year to stimulate new discoveries in areas they considered to be worth pursuing, and also to reward deserving inventors and manufacturers. Third, the award of assistance and pensions to inventors and their families continued well into the nineteenth century. Fourth, at times the Society purchased patent rights and released the invention into the public domain.

The basic principles of the modern French patent system were evident in the early French statutes and were retained in later revisions. Since France during the ancien régime was likely the first country to introduce systematic examinations of applications for privileges, it is somewhat ironic that commentators point to the retention of registration without prior examination as the defining feature of the “French system” until 1978. In 1910 fees remained high, although somewhat lower in real terms, at one hundred francs per year. Working requirements were still in place, and patentees were not allowed to satisfy the requirement by importing the article even if the patentee had manufactured it in another European country. However, the requirement was waived if the patentee could persuade the tribunal that the patent was not worked because of unavoidable circumstances.

Similar problems were evident in the market for patent rights. Contracts for patent assignments were filed in the office of the Prefect for the district, but since there was no central source of information it was difficult to trace the records for specific inventions. The annual fees for the entire term of the patent had to be paid in advance if the patent was assigned to a second party. Like patents themselves, assignments and licenses were issued with a caveat emptor clause. This was partially due to the nature of patent property under a registration system, and partially to the uncertainties of legal jurisprudence in this area. For both buyer and seller, the uncertainties associated with the exchange likely reduced the net expected value of trade.

The Spanish Patent System

France’s patent laws were adopted in its colonies, and they also diffused to other countries through their influence on Spain’s system following the Spanish Decree of 1811. The Spanish experience during the nineteenth century is instructive since this country experienced lower rates and levels of economic development than the early industrializers. Like those of its European neighbors, early Spanish rules and institutions were vested in privileges which had lasting effects that could be detected even in the later period. The per capita rate of patenting in Spain was lower than in other major European countries, and foreigners filed the majority of patented inventions. Between 1759 and 1878, roughly one half of all grants were to citizens of other countries, notably France and (to a lesser extent) Britain. Thus, the transfer of foreign technology was a major concern in the political economy of Spain.

This dependence on foreign technologies was reflected in the structure of the Spanish patent system, which permitted patents of introduction as well as patents for invention. Patents of introduction were granted to entrepreneurs who wished to produce foreign technologies that were new to Spain, with no requirement of claims to being the true inventor. Thus, the sole objective of these instruments was to enhance innovation and production in Spain. Since the owners of introduction patents could not prevent third parties from importing similar machines from abroad, they also had an incentive to maintain reasonable pricing structures. Introduction patents had a term of only five years, with a cost of 3000 reales, whereas the fees for patents for invention were 1000 reales for five years, 3000 reales for ten years, and 6000 reales for a term of fifteen years. Patentees were required to work the patent within one year, and about a quarter of patents granted between 1826 and 1878 were actually implemented. Since patents of introduction had a brief term, they encouraged the production of items with high expected profits and a quick payback period, after which monopoly rights expired, and the country could benefit from their diffusion.

The German Patent System

The German patent system was influenced by developments in the United States, and itself influenced legislation in Argentina, Austria, Brazil, Denmark, Finland, Holland, Norway, Poland, Russia and Sweden. The German Empire was founded in 1871, and in the first six years each state adopted its own policies. Alsace-Lorraine favored a French-style system, whereas others such as Hamburg and Bremen did not offer patent protection. However, after strong lobbying by supporters of both sides of the debate regarding the merits of patent regimes, Germany passed a unified national Patent Act of 1877.

The 1877 statute created a centralized administration for the grant of a federal patent for original inventions. Industrial entrepreneurs succeeded in their objective of creating a “first to file” system, so patents were granted to the first applicant rather than to the “first and true inventor,” but in 1936 the National Socialists introduced a first to invent system. Applications were examined by examiners in the Patent Office who were expert in their field. During the eight weeks before the grant, patent applications were open to the public and an opposition could be filed denying the validity of the patent. German patent fees were deliberately high to eliminate protection for trivial inventions, with a renewal system that required payment of 30 marks for the first year, 50 marks for the second year, 100 marks for the third, and 50 marks annually after the third year. In 1923 the patent term was extended from fifteen years to eighteen years.

German patent policies encouraged diffusion, innovation and growth in specific industries with a view to fostering economic development. Patents could not be obtained for food products, pharmaceuticals or chemical products, although the processes through which such items were produced could be protected. It has been argued that the lack of restrictions on the use of innovations and the incentives to patent around existing processes spurred productivity and diffusion in these industries. The authorities further ensured the diffusion of patent information by publishing claims and specifications before patents were granted. The German patent system also facilitated the use of inventions by firms, with the early application of a “work for hire” doctrine that allowed enterprises access to the rights and benefits of inventions of employees.

Although the German system was close to the American patent system in many respects, it was more stringent in others, resulting in patent grants that were fewer in number but likely higher in average value. The patent examination process required that the invention be new, nonobvious, and also capable of producing greater efficiency. As in the United States, once patents were granted, the courts adopted an extremely liberal attitude in interpreting and enforcing them. Penalties for willful infringement included not only fines, but also the possibility of imprisonment. The grant of a patent could be revoked after the first three years if the patent was not worked, if the owner refused to grant licenses for the use of an invention that was deemed in the public interest, or if the invention was primarily being exploited outside of Germany. However, in most cases, a compulsory license was regarded as adequate.

After 1891 a parallel and weaker version of patent protection could be obtained through a Gebrauchsmuster or utility patent (sometimes called a petty patent), which was granted through a registration system. This protection was available for inventions with only a slight degree of novelty that could be represented by drawings or models, and for a limited term of three years (renewable once for a total life of six years). About twice as many utility patents as examined patents were granted early in the 1930s. Patent protection based on co-existing systems of registration and examination appears to have served distinct but complementary purposes. Remedies for infringement of utility patents also included fines and imprisonment.

Other European Patent Systems

Very few developed countries would now seriously consider eliminating statutory protection for inventions, but in the second half of the nineteenth century the “patent controversy” in Europe pitted advocates of patent rights against an effective abolitionist movement. For a short period, the abolitionists were strong enough to obtain support for dismantling patent systems in a number of European countries. In 1863 the Congress of German Economists declared “patents of invention are injurious to common welfare;” and the movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The Swiss cantons did not adopt patent protection until 1888, with an extension in the scope of coverage in 1907. The abolitionists based their arguments on the benefits of free trade and competition, and viewed patents as part of an anticompetitive and protectionist strategy analogous to tariffs on imports. Instead of state-sponsored monopoly awards, they argued, inventors could be rewarded by alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

According to one authority, the Netherlands eventually reinstated its patent system in 1912 and Switzerland introduced patent laws in 1888 largely because of a keen sense of morality, national pride and international pressure to do so. The appeal to “morality” as an explanatory factor is incapable of explaining the timing and nature of changes in strategies. Nineteenth-century institutions were not exogenous and their introduction or revisions generally reflected the outcome of a self-interested balancing of costs and benefits. The Netherlands and Switzerland were initially able to benefit from their ability to free-ride on the investments that other countries had made in technological advances. As for the cost of lower incentives for discoveries by domestic inventors, the Netherlands was never vaunted as a leader in technological innovation, and this is reflected in its low per capita patenting rates both before and after the period without patent laws. The country recorded a total of only 4561 patents in the entire period from 1800 to 1869 and, even after adjusting for population, the Dutch patenting rate in 1869 was a mere 13.4 percent of the U.S. patenting rate. Moreover, between 1851 and 1865, 88.6 percent of patents in the Netherlands had been granted to foreigners. After the patent laws were reintroduced in 1912, the major beneficiaries were again foreign inventors, who obtained 79.3 percent of the patents issued in the Netherlands. Thus, the Netherlands had little reason to adopt patent protection, except for external political pressures and the possibility that some types of foreign investment might be deterred.

The case was somewhat different for Switzerland, which was noted for being innovative, but in a narrow range of pursuits. Since the scale of output and markets was quite limited, much of Swiss industry generated few incentives for invention. A number of the industries in which the Swiss excelled, such as hand-made watches, chocolates and food products, were less likely to generate inventions that warranted patent protection. For instance, despite the much larger consumer market in the United States, during the entire nineteenth century fewer than 300 U.S. patents related to chocolate composition or production. Improvements in pursuits such as watch-making could be readily protected by trade secrecy as long as the industry remained artisanal. However, with increased mechanization and worker mobility, secrecy would ultimately prove to be ineffective, and innovators would be unable to appropriate returns without more formal means of exclusion.

According to contemporary observers, the Swiss resolved to introduce patent legislation not because of a sudden newfound sense of morality, but because they feared that American manufacturers were surpassing them as a result of patented innovations in the mass production of products such as boots, shoes and watches. Indeed, before 1890, American inventors obtained more than 2068 patents on watches, and the U.S. watch making industry benefited from mechanization and strong economies of scale that led to rapidly falling prices of output, making them more competitive internationally. The implications are that the rates of industrial and technical progress in the United States were more rapid, and technological change was rendering artisanal methods obsolete in products with mass markets. Thus, the Swiss endogenously adopted patent laws because of falling competitiveness in their key industrial sectors.

What was the impact of the introduction of patent protection in Switzerland? Foreign inventors could obtain patents in the United States regardless of their domestic legislation, so we can approach this question tangentially by examining the patterns of patenting in the United States by Swiss residents before and after the 1888 reforms. Between 1836 and 1888, Swiss residents obtained a grand total of 585 patents in the United States. Fully a third of these patents were for watches and music boxes, and only six were for textiles or dyeing, industries in which Switzerland was regarded as competitive early on. Swiss patentees were more oriented to the international market, rather than the small and unprotected domestic market where they could not hope to gain as much from their inventions. For instance, in 1872 Jean-Jacques Mullerpack of Basel collaborated with Leon Jarossonl of Lille, France to invent an improvement in dyeing black with aniline colors, which they assigned to William Morgan Brown of London, England. Another Basel inventor, Alfred Kern, assigned his 1883 patent for violet aniline dyes to the Badische Anilin and Soda Fabrik of Mannheim, Germany.

After the patent reforms, the rate of Swiss patenting in the United States immediately increased. Swiss patentees obtained an annual average of 32.8 patents in the United States in the decade before the patent law was enacted in Switzerland. After the Swiss allowed patenting, this figure increased to an average of 111 each year in the following six years, and in the period from 1895 to 1900 a total of 821 Swiss patents were filed in the United States. The decadal rate of patenting per million residents increased from 111.8 for the ten years up to the reforms, to 451 per million residents in the 1890s, 513 in the 1900s, 458 in the 1910s and 684 in the 1920s. U.S. statutes required worldwide novelty, and patents could not be granted for discoveries that had been in prior use, so the increase was not due to a backlog of trade secrets that were now patented.

Moreover, the introduction of Swiss patent laws also affected the direction of inventions that Swiss residents patented in the United States. After the passage of the law, such patents covered a much broader range of inventions, including gas generators, textile machines, explosives, turbines, paints and dyes, and drawing instruments and lamps. The relative importance of watches and music boxes immediately fell from about a third before the reforms to 6.2 percent and 2.1 percent respectively in the 1890s and even further to 3.8 percent and 0.3 percent between 1900 and 1909. Another indication that international patenting was not entirely unconnected to domestic Swiss inventions can be discerned from the fraction of Swiss patents (filed in the U.S.) that related to process innovations. Before 1888, 21 percent of the patent specifications mentioned a process. Between 1888 and 1907, the Swiss statutes included the requirement that patents should include mechanical models, which precluded patenting of pure processes. The fraction of specifications that mentioned a process fell during the period between 1888 and 1907, but returned to 22 percent when the restriction was modified in 1907.

In short, although the Swiss experience is often cited as proof of the redundancy of patent protection, the limitations of this special case should be taken into account. The domestic market was quite small and offered minimal opportunity or inducements for inventors to take advantage of economies of scale or cost-reducing innovations. Manufacturing tended to cluster in a few industries where innovation was largely irrelevant, such as premium chocolates, or in artisanal production that was susceptible to trade secrecy, such as watches and music boxes. In other areas, notably chemicals, dyes and pharmaceuticals, Swiss industries were export-oriented, but even today their output tends to be quite specialized and high-valued rather than mass-produced. Export-oriented inventors were likely to have been more concerned about patent protection in the important overseas markets, rather than in the home market. Thus, between 1888 and 1907, although Swiss laws excluded patents for chemicals, pharmaceuticals and dyes, 20.7 percent of the Swiss patents filed in the United States were for just these types of inventions. The scanty evidence on Switzerland suggests that the introduction of patent rights was accompanied by changes in the rate and direction of inventive activity. In any event, both the Netherlands and Switzerland featured unique circumstances that seem to hold few lessons for developing countries today.

The Patent System in the United States

The United States stands out as having established one of the most successful patent systems in the world. Over six million patents have been issued since 1790, and American industrial supremacy has frequently been credited to the country’s favorable treatment of inventors and the inducements held out for inventive activity. The first Article of the U.S. Constitution included a clause to “promote the Progress of Science and the useful Arts by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Congress complied by passing a patent statute in April 1790. In 1836 the United States created the first modern patent institution in the world, a system whose features differed in significant respects from those of other major countries. The historical record indicates that the legislature’s creation of a uniquely American system was a deliberate and conscious process of promoting open access to the benefits of private property rights in inventions. The laws were enforced by a judiciary which was willing to grapple with difficult questions such as the extent to which a democratic and market-oriented political economy was consistent with exclusive rights. Courts explicitly attempted to implement decisions that promoted economic growth and social welfare.

The primary feature of the “American system” is that all applications are subject to an examination for conformity with the laws and for novelty. An examination system was set in place in 1790, when a select committee consisting of the Secretary of State (Thomas Jefferson), the Attorney General and the Secretary of War scrutinized the applications. These duties proved too time-consuming for highly ranked officials who had other onerous responsibilities, so three years later the examination was replaced by a registration system. The validity of patents was left up to the district courts, which had the power to set in motion a process that could end in the repeal of the patent. However, by the 1830s this process was viewed as cumbersome, and the statute that was passed in 1836 set in place the essential structure of the current patent system. In particular, the 1836 Patent Law established the Patent Office, whose trained and technically qualified employees were authorized to examine applications. Employees of the Patent Office were not permitted to obtain patent rights. In order to constrain the ability of examiners to engage in arbitrary actions, the applicant was given the right to file a bill in equity to contest the decisions of the Patent Office, with the further right of appeal to the Supreme Court of the United States.

American patent policy likewise stands out in its insistence on affordable fees. The legislature debated the question of appropriate fees, and the first patent law in 1790 set the rate at the minimal sum of $3.70 plus copy costs. In 1793 the fees were increased to $30, and were maintained at this level until 1861. In that year, they were raised to $35, and the term of the patent was changed from fourteen years (with the possibility of an extension) to seventeen years (with no extensions). The 1869 Report of the Commissioner of Patents compared the $35 fee for a US patent to the significantly higher charges in European countries such as Britain, France, Russia ($450), Belgium ($420) and Austria ($350). The Commissioner speculated that both the private and social costs of patenting were lower in a system of impartial specialized examiners than under a system where similar services were performed on a fee-per-service basis by private solicitors. He pointed out that in the U.S. the fees were not intended to exact a price for the patent privilege or to raise revenues for the state; the disclosure of information was the sole price for the patent property right, and the fees were imposed merely to cover the administrative expenses of the Office.

The basic parameters of the U.S. patent system were transparent and predictable, in itself an aid to those who wished to obtain patent rights. In addition, American legislators were concerned with ensuring that information about the stock of patented knowledge was readily available and diffused rapidly. As early as 1805 Congress stipulated that the Secretary of State should publish an annual list of patents granted the preceding year, and after 1832 also required the publication in newspapers of notices regarding expired patents. The Patent Office itself was a source of centralized information on the state of the arts. However, Congress was also concerned with the question of providing for decentralized access to patent materials. The Patent Office maintained repositories throughout the country, where inventors could forward their patent models at the expense of the Patent Office. Rural inventors could apply for patents without significant obstacles, because applications could be submitted by mail free of postage.

American laws employed the language of the English statute in granting patents to “the first and true inventor.” Nevertheless, unlike in England, the phrase was used literally, to grant patents for inventions that were original in the world, not simply within U.S. borders. American patent laws provided strong protection for citizens of the United States, but varied over time in their treatment of foreign inventors. Americans could not obtain patents for imported discoveries, but the earliest statutes of 1793, 1800 and 1832 restricted patent property to citizens or to residents who declared that they intended to become citizens. As such, while an American could not appropriate patent rights to a foreign invention, he could freely use the idea without any need to bear licensing or similar costs that would otherwise have been due if the inventor had been able to obtain a patent in this country. In 1836, the stipulations on citizenship or residency were removed, but were replaced with discriminatory patent fees: foreigners could obtain a patent in the U.S. for a fee of three hundred dollars, or five hundred if they were British. After 1861 patent rights (with the exception of caveats) were available to all applicants on the same basis without regard to nationality.

The American patent system was based on the presumption that social welfare coincided with the individual welfare of inventors. Accordingly, legislators rejected restrictions on the rights of American inventors. However, the 1832 and 1836 laws stipulated that foreigners had to exploit their patented invention within eighteen months. These clauses seem to have been interpreted by the courts in a fairly liberal fashion, since alien patentees “need not prove that they hawked the patented improvement to obtain a market for it, or that they endeavored to sell it to any person, but that it rested upon those who sought to defeat the patent to prove that the plaintiffs neglected or refused to sell the patented invention for reasonable prices when application was made to them to purchase.” Such provisions proved to be temporary aberrations and were not included in subsequent legislation. Working requirements or compulsory licenses were regarded as unwarranted infringements of the rights of “meritorious inventors,” and incompatible with the philosophy of U.S. patent grants. Patentees were not required to pay annuities to maintain their property, there were no opposition proceedings, and once granted a patent could not be revoked unless there was proven evidence of fraud.

One of the advantages of a system that secures property rights is that it facilitates contracts and trade. Assignments provide a straightforward index of the effectiveness of the American system, since trade in inventions would hardly proliferate if patent rights were uncertain or worthless. An extensive national network of licensing and assignments developed early on, aided by legal rulings that overturned contracts for useless or fraudulent patents. In 1845 the Patent Office recorded 2,108 assignments, which can be compared to the cumulative stock of 7,188 patents that were still in force in that year. By the 1870s assignments averaged over 9,000 per year, and this increased in the next decade to over 12,000 transactions recorded annually. This flourishing market for patented inventions provided an incentive for further inventive activity by inventors who were able to appropriate the returns from their efforts, and also linked patents and productivity growth.

Property rights are worth little unless they can be legally enforced in a consistent, certain, and predictable manner. A significant part of the explanation for the success of the American intellectual property system relates to the efficiency with which the laws were interpreted and implemented. United States federal courts from their inception attempted to establish a store of doctrine that fulfilled the intent of the Constitution to secure the rights of intellectual property owners. The judiciary acknowledged that inventive efforts varied with the extent to which inventors could appropriate the returns on their discoveries, and attempted to ensure that patentees were not unjustly deprived of the benefits from their inventions. Numerous reported decisions before the early courts declared that, rather than unwarranted monopolies, patent rights were “sacred” and to be regarded as the just recompense to inventive ingenuity. Early courts had to grapple with a number of difficult issues, such as the appropriate measure of damages, disputes between owners of conflicting patents, and how to protect the integrity of contracts when the law altered. Changes inevitably occurred when litigants and judiciary both adapted to a more complex inventive and economic environment. However, the system remained true to the Constitution in the belief that the defense of rights in patented invention was important in fostering industrial and economic development.

Economists such as Joseph Schumpeter have linked market concentration and innovation, and patent rights are often thought to encourage the establishment of monopoly enterprises. Thus, an important aspect of the enforcement of patents and intellectual property in general depends on competition or antitrust policies. The attitudes of the judiciary towards patent conflicts are primarily shaped by its interpretation of the monopoly aspect of the patent grant. The American judiciary in the early nineteenth century did not recognize patents as monopolies, arguing that patentees added to social welfare through innovations which had never existed before, whereas monopolists secured to themselves rights that already belonged to the public. Ultimately, the judiciary came to recognize openly that the enforcement and protection of all property rights involved trade-offs between individual monopoly benefits and social welfare.

The passage of the Sherman Act in 1890 was associated with a populist emphasis on the need to protect the public from corporate monopolies, including those based on patent protection, and it raised the prospect of conflicts between patent policies and the promotion of social welfare through industrial competition. Firms have rarely been charged directly with antitrust violations based on patent issues. At the same time, a number of landmark restraint-of-trade lawsuits have involved technological innovators, ranging from innovative enterprises such as John Deere & Co., American Can, and International Harvester in the early decades of the twentieth century to the numerous cases since 1970 against IBM, Xerox, Eastman Kodak and, most recently, Intel and Microsoft. The evidence suggests that, holding other factors constant, more innovative firms and those with larger patent stocks are more likely to be charged with antitrust violations. A growing fraction of cases involve firms jointly charged with antitrust violations that are linked to patent-based market power and to concerns about “innovation markets.”

The Japanese Patent System

Japan emerged from the Meiji era as a follower nation that deliberately designed institutions to emulate those of the most advanced industrial countries. Accordingly, in 1886 Takahashi Korekiyo was sent on a mission to examine patent systems in Europe and the United States. The Japanese envoy was not favorably impressed with the European countries in this regard. Instead, he reported: “… we have looked about us to see what nations are the greatest, so that we could be like them; … and we said, ‘What is it that makes the United States such a great nation?’ and we investigated and we found it was patents, and we will have patents.” The first national patent statute in Japan was passed in 1888, and copied many features of the U.S. system, including the examination procedures.

However, even in the first statute, differences existed that reflected Japanese priorities and the “wise eclecticism of Japanese legislators.” For instance, patents were not granted to foreigners, protection could not be obtained for fashion, food products, or medicines, patents that were not worked within three years could be revoked, and severe remedies were imposed for infringement, including penal servitude. After Japan became a signatory of the Paris Convention a new law was passed in 1899, which amended existing legislation to accord with the agreements of the Convention, and extended protection to foreigners. The influence of the German laws was evident in subsequent reforms in 1909 (petty or utility patents were protected) and 1921 (protection was removed from chemical products, work-for-hire doctrines were adopted, and an opposition procedure was introduced). The Act of 1921 also permitted the state to revoke a patent grant on payment of appropriate compensation if it was deemed in the public interest. Medicines, food and chemical products could not be patented, but protection could be obtained for processes relating to their manufacture.

The modern Japanese patent system is an interesting amalgam of features drawn from the major patent institutions in the world. Patent applications are filed, and the applicants then have seven years within which they can request an examination. Before 1996 examined patents were published prior to the actual grant, and could be opposed before the final grant; but at present, opposition can only occur in the first six months after the initial grant. Patents are also given for utility models or incremental inventions which are required to satisfy a lower standard of novelty and nonobviousness and can be more quickly commercialized. It has been claimed that the Japanese system favors the filing of a plethora of narrowly defined claims for utility models that build on the more substantive contributions of patent grants, leading to the prospect of an anti-commons through “patent flooding.” Others argue that utility models aid diffusion and innovation in the early stages of the patent term, and that the pre-grant publication of patent specifications also promotes diffusion.

Harmonization of International Patent Laws

Today very few developed countries would seriously consider eliminating statutory protection for intellectual property, but in the second half of the nineteenth century the “patent controversy” pitted advocates of patent rights against an effective abolitionist movement. For a short period the latter group was strong enough to win support for dismantling the patent systems in countries such as England, and in 1863 the Congress of German Economists declared “patents of invention are injurious to common welfare.” The movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The abolitionists based their arguments on the benefits of free trade and competition and viewed patents as part of a protectionist strategy analogous to tariffs. Instead of receiving monopoly awards, they argued, inventors could be rewarded through alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

The decisive victory of the patent proponents shifted the focus of interest to the other extreme, and led to efforts to attain uniformity in intellectual property rights regimes across countries. Part of the impetus for change occurred because the costs of discordant national rules became more burdensome as the volume of international trade in industrial products grew over time. Americans were also concerned about the lack of protection accorded to their exhibits in the increasingly prominent World’s Fairs. Indeed, the first international patent convention was held in Austria in 1873, at the suggestion of U.S. policy makers, who wanted to be certain that their inventors would be adequately protected at the International Exposition in Vienna that year. It also yielded an opportunity to protest the provisions in Austrian law that discriminated against foreigners, including a requirement that patents had to be worked within one year or risk invalidation. The Vienna Convention adopted several resolutions, including a recommendation, opposed by the United States, in favor of compulsory licenses if they were deemed in the public interest. However, the convention followed the U.S. lead and did not approve compulsory working requirements.

International conventions proliferated in subsequent years, and their tenor tended to reflect the opinions of the conveners. Their objective was not to reach compromise solutions that would reflect the needs and wishes of all participants, but rather to promote preconceived ideas. The overarching goal was to pursue uniform international patent laws, although there was little agreement about the finer points of these laws. It became clear that the goal of complete uniformity was not practicable, given the different objectives, ideologies and economic circumstances of participants. Nevertheless, in 1884 the International Union for the Protection of Industrial Property was signed by Belgium, Portugal, France, Guatemala, Italy, the Netherlands, San Salvador, Serbia, Spain and Switzerland. The United States became a member in 1887, and a significant number of developing countries followed suit, including Brazil, Bulgaria, Cuba, the Dominican Republic, Ceylon, Mexico, Trinidad and Tobago and Indonesia, among others.

The United States was the most prolific patenting nation in the world, many of the major American enterprises owed their success to patents and were expanding into international markets, and the U.S. patent system was recognized as the most successful. It is therefore not surprising that patent harmonization implied convergence towards the American model despite resistance from other nations. Countries such as Germany were initially averse to extending equal protection to foreigners because they feared that their domestic industry would be overwhelmed by American patents. Ironically, because its patent laws were the most liberal towards patentees, the United States found itself in a weaker bargaining position than nations that could make concessions by changing their provisions. The U.S. pressed for the adoption of reciprocity (which would ensure that American patentees were treated as favorably abroad as in the United States) but this principle was rejected in favor of “national treatment” (American patentees were to be granted the same rights as nationals of the foreign country). This likely influenced the U.S. tendency to use bilateral trade sanctions rather than multilateral conventions to obtain reforms in international patent policies.

It was commonplace in the nineteenth century to rationalize and advocate close links between trade policies, protection, and international laws regarding intellectual property. These links were evident at the most general philosophical level, and at the most specific, especially in terms of compulsory working requirements and provisions to allow imports by the patentee. For instance, the 1880 Paris Convention considered the question of imports of the patented product by the patentee. According to the laws of France, Mexico and Tunisia, such importation would result in the repeal of the patent grant. The Convention inserted an article that explicitly ruled out forfeiture of the patent under these circumstances, which led some French commentators to argue that “the laws on industrial property… will be truly disastrous if they do not have a counterweight in tariff legislation.” The movement to create an international patent system elucidated the fact that intellectual property laws do not exist in a vacuum, but are part of a bundle of rights that are affected by other laws and policies.

Conclusion

Appropriate institutions to promote creations in the material and intellectual sphere are especially critical because ideas and information are public goods that are characterized by nonrivalry and nonexclusion. Once the initial costs are incurred, ideas can be reproduced at zero marginal cost and it may be difficult to exclude others from their use. Thus, in a competitive market, public goods may suffer from underprovision or may never be created because of a lack of incentive on the part of the original provider who bears the initial costs but may not be able to appropriate the benefits. Market failure can be ameliorated in several ways, for instance through government provision, rewards or subsidies to original creators, private patronage, and through the creation of intellectual property rights.

Patents allow the initial producers a limited period during which they are able to benefit from a right of exclusion. If creativity is a function of expected profits, these grants to inventors have the potential to increase social production possibilities at lower cost. Disclosure requirements promote diffusion, and the expiration of the temporary monopoly right ultimately adds to the public domain. Overall welfare is enhanced if the social benefits of diffusion outweigh the deadweight and social costs of temporary exclusion. This period of exclusion may be costly for society, especially if future improvements are deterred, and if rent-seeking such as redistributive litigation results in wasted resources. Much attention has also been accorded to theoretical features of the optimal system, including the breadth, longevity, and height of patent and copyright grants.
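A stylized way to state this condition (the notation here is purely illustrative and is not drawn from any particular model cited in this article) is that the patent grant raises overall welfare when

\[
\Delta W \;=\; B_{\text{diffusion}} \;-\; \left( DWL_{\text{exclusion}} + C_{\text{social}} \right) \;>\; 0 ,
\]

where \(B_{\text{diffusion}}\) denotes the social benefits of disclosure and eventual diffusion, \(DWL_{\text{exclusion}}\) the deadweight loss during the period of exclusion, and \(C_{\text{social}}\) other social costs such as deterred follow-on improvements and wasteful redistributive litigation.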

However, strongly enforced rights do not always benefit the producers and owners of intellectual property rights, especially if there is a prospect of cumulative invention where follow-on inventors build on the first discovery. Thus, more nuanced models are ambivalent about the net welfare benefits of strong exclusive rights to inventions. Indeed, network models imply that the social welfare of even producers may increase from weak enforcement if more extensive use of the product increases the value to all users. Under these circumstances, the patent owner may benefit from the positive externalities created by piracy. In the absence of royalties, producers may appropriate returns through ancillary means, such as the sale of complementary items or improved reputation. In a variant of the durable-goods monopoly problem, it has been shown that piracy can theoretically increase the demand for products by ensuring that producers can credibly commit to uniform prices over time. Also in this vein, price and/or quality discrimination of non-private goods across pirates and legitimate users can result in net welfare benefits for society and for the individual firm. If the cost of imitation increases with quality, infringement can also benefit society if it causes firms to adopt a strategy of producing higher quality commodities.

Economic theorists who are troubled by the imperfections of intellectual property grants have proposed alternative mechanisms that lead to more satisfactory mathematical solutions. Theoretical analyses have advanced our understanding in this area, but such models by their nature cannot capture many complexities. They tend to overlook such factors as the potential for greater corruption or arbitrariness in the administration of alternatives to patents. Similarly, they fail to appreciate the role of private property rights in conveying information and facilitating markets, and their value in reducing risk and uncertainty for independent inventors with few private resources. The analysis becomes even less satisfactory when producers and consumers belong to different countries. Thus, despite the flurry of academic research on the economics of intellectual property, we have not progressed far beyond Fritz Machlup’s declaration that our state of knowledge does not allow us to recommend either the introduction or the removal of such systems. Existing studies leave a wide area of ambiguity about the causes and consequences of institutional structures in general, and their evolution across time and region.

In the realm of intellectual property, questions from four centuries ago are still current, ranging from its philosophical underpinnings, to whether patents and copyrights constitute optimal policies towards intellectual inventions, to the growing concerns of international political economy. A number of scholars are so impressed with technological advances in the twenty-first century that they argue we have reached a critical juncture where we need completely new institutions. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare, and whether piracy might be to the advantage of developing countries. Similarly, the link between trade and intellectual property rights that informs the TRIPS (trade-related aspects of intellectual property rights) agreement was quite standard two centuries ago.

Today the majority of patents are filed in developed countries by the residents of developed countries, most notably those of Japan and the United States. The developing countries of the twenty-first century are under significant political pressure to adopt stronger patent laws and enforcement, even though few patents are filed by residents of the developing countries. Critics of intellectual property rights point to costs, such as monopoly rents and higher barriers to entry, administrative costs, outflows of royalty payments to foreign entities, and a lack of indigenous innovation. Other studies, however, have more optimistic findings regarding the role of patents in economic and social development. They suggest that stronger protection can encourage more foreign direct investment, greater access to technology, and increased benefits from trade openness. Moreover, both economic history and modern empirical research indicate that stronger patent rights and more effective markets in invention can, by encouraging and enabling the inventiveness of ordinary citizens of developing countries, help to increase social and economic welfare.

Patent Statistics for France, Britain, the United States and Germany, 1790-1960 (. = not available)
YEAR FRANCE BRITAIN U.S. GERMANY
1790 . 68 3 .
1791 34 57 33 .
1792 29 85 11 .
1793 4 43 20 .
1794 0 55 22 .
1795 1 51 12 .
1796 8 75 44 .
1797 4 54 51 .
1798 10 77 28 .
1799 22 82 44 .
1800 16 96 41 .
1801 34 104 44 .
1802 29 107 65 .
1803 45 73 97 .
1804 44 60 84 .
1805 63 95 57 .
1806 101 99 63 .
1807 66 94 99 .
1808 61 95 158 .
1809 52 101 203 .
1810 93 108 223 .
1811 66 115 215 0
1812 96 119 238 2
1813 88 142 181 2
1814 53 96 210 1
1815 77 102 173 10
1816 115 118 206 10
1817 162 103 174 16
1818 153 132 222 18
1819 138 101 156 10
1820 151 97 155 10
1821 180 109 168 11
1822 175 113 200 8
1823 187 138 173 22
1824 217 180 228 25
1825 321 250 304 17
1826 281 131 323 67
1827 333 150 331 69
1828 388 154 368 87
1829 452 130 447 59
1830 366 180 544 57
1831 220 150 573 34
1832 287 147 474 46
1833 431 180 586 76
1834 576 207 630 66
1835 556 231 752 73
1836 582 296 702 65
1837 872 256 426 46
1838 1312 394 514 104
1839 730 411 404 125
1840 947 440 458 156
1841 925 440 490 162
1842 1594 371 488 153
1843 1397 420 493 160
1844 1863 450 478 158
1845 2666 572 473 256
1846 2750 493 566 252
1847 2937 493 495 329
1848 1191 388 583 256
1849 1953 514 984 253
1850 2272 523 883 308
1851 2462 455 752 274
1852 3279 1384 885 272
1853 4065 2187 844 287
1854 4563 1878 1755 276
1855 5398 2046 1881 287
1856 5761 1094 2302 393
1857 6110 2028 2674 414
1858 5828 1954 3455 375
1859 5439 1977 4160 384
1860 6122 2063 4357 550
1861 5941 2047 3020 551
1862 5859 2191 3214 630
1863 5890 2094 3773 633
1864 5653 2024 4630 557
1865 5472 2186 6088 609
1866 5671 2124 8863 549
1867 6098 2284 12277 714
1868 6103 2490 12526 828
1869 5906 2407 12931 616
1870 3850 2180 12137 648
1871 2782 2376 11659 458
1872 4875 2771 12180 958
1873 5074 2974 11616 1130
1874 5746 3162 12230 1245
1875 6007 3112 13291 1382
1876 6736 3435 14169 1947
1877 7101 3317 12920 1604
1878 7981 3509 12345 4200
1879 7828 3524 12165 4410
1880 7660 3741 12902 3960
1881 7813 3950 15500 4339
1882 7724 4337 18091 4131
1883 8087 3962 21162 4848
1884 8253 9983 19118 4459
1885 8696 8775 23285 4018
1886 9011 9099 21767 4008
1887 8863 9226 20403 3882
1888 8669 9309 19551 3923
1889 9287 10081 23324 4406
1890 9009 10646 25313 4680
1891 9292 10643 22312 5550
1892 9902 11164 22647 5900
1893 9860 11600 22750 6430
1894 10433 11699 19855 6280
1895 10257 12191 20856 5720
1896 11430 12473 21822 5410
1897 12550 14210 22067 5440
1898 12421 14167 20377 5570
1899 12713 14160 23278 7430
1900 12399 13710 24644 8784
1901 12103 13062 25546 10508
1902 12026 13764 27119 10610
1903 12469 15718 31029 9964
1904 12574 15089 30258 9189
1905 12953 14786 29775 9600
1906 13097 14707 31170 13430
1907 13170 16272 35859 13250
1908 13807 16284 32735 11610
1909 13466 15065 36561 11995
1910 16064 15269 35141 12100
1911 15593 17164 32856 12640
1912 15737 15814 36198 13080
1913 15967 16599 33917 13520
1914 12161 15036 39892 12350
1915 5056 11457 43118 8190
1916 3250 8424 43892 6271
1917 4100 9347 40935 7399
1918 4400 10809 38452 7340
1919 10500 12301 36797 7766
1920 18950 14191 37060 14452
1921 17700 17697 37798 15642
1922 18300 17366 38369 20715
1923 19200 17073 38616 20526
1924 19200 16839 42584 18189
1925 18000 17199 46432 15877
1926 18200 17333 44733 15500
1927 17500 17624 41717 15265
1928 22000 17695 42357 15598
1929 24000 18937 45267 20202
1930 24000 20888 45226 26737
1931 24000 21949 51761 25846
1932 21850 21150 53504 26201
1933 20000 17228 48807 21755
1934 19100 16890 44452 17011
1935 18000 17675 40663 16139
1936 16700 17819 39831 16750
1937 16750 17614 37738 14526
1938 14000 19314 38102 15068
1939 15550 17605 43118 16525
1940 10100 11453 42323 14647
1941 8150 11179 41171 14809
1942 10000 7962 38514 14648
1943 12250 7945 31101 14883
1944 11650 7712 28091 .
1945 7360 7465 25712 .
1946 11050 8971 21859 .
1947 13500 11727 20191 .
1948 13700 15558 24007 .
1949 16700 20703 35224 .
1950 17800 13509 43219 .
1951 25200 13761 44384 27767
1952 20400 21380 43717 37179
1953 43000 17882 40546 37113
1954 34000 17985 33910 19140
1955 23000 20630 30535 14760
1956 21900 19938 46918 18150
1957 23000 25205 42873 20467
1958 24950 18531 48450 19837
1959 41600 18157 52509 22556
1960 35000 26775 47286 19666

Additional Reading

Khan, B. Zorina. The Democratization of Invention: Patents and Copyrights in American Economic Development. New York: Cambridge University Press, 2005.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth, 1790-1930.” NBER Working Paper No. 10966. Cambridge, MA: December 2004. (Available at www.nber.org.)

Bibliography

Besen, Stanley M., and Leo J. Raskind. “Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5, no. 1 (1991): 3-27.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Coulter, Moureen. Property in Ideas: The Patent Question in Mid-Victorian England. Kirksville, MO: Thomas Jefferson Press, 1991.

Dutton, H. I. The Patent System and Inventive Activity during the Industrial Revolution, 1750-1852. Manchester, UK: Manchester University Press, 1984.

Epstein, R. “Industrial Inventions: Heroic or Systematic?” Quarterly Journal of Economics 40 (1926): 232-72.

Gallini, Nancy T. “The Economics of Patents: Lessons from Recent U.S. Patent Reform.” Journal of Economic Perspectives 16, no. 2 (2002): 131–54.

Gilbert, Richard and Carl Shapiro. “Optimal Patent Length and Breadth.” Rand Journal of Economics 21 (1990): 106-12.

Gilfillan, S. Colum. The Sociology of Invention. Cambridge, MA: Follett, 1935.

Gomme, A. A. Patents of Invention: Origin and Growth of the Patent System in Britain. London: Longmans Green, 1946.

Harding, Herbert. Patent Office Centenary. London: Her Majesty’s Stationery Office, 1953.

Hilaire-Pérez, Liliane. Inventions et Inventeurs en France et en Angleterre au XVIIIe siècle. Lille: Université de Lille, 1994.

Hilaire-Pérez, Liliane. L’invention technique au siècle des Lumières. Paris: Albin Michel, 2000.

Jeremy, David J. Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Khan, B. Zorina. “Property Rights and Patent Litigation in Early Nineteenth-Century America.” Journal of Economic History 55, no. 1 (1995): 58-97.

Khan, B. Zorina. “Married Women’s Property Right Laws and Female Commercial Activity.” Journal of Economic History 56, no. 2 (1996): 356-88.

Khan, B. Zorina. “Federal Antitrust Agencies and Public Policy towards Patents and Innovation.” Cornell Journal of Law and Public Policy 9 (1999): 133-69.

Khan, B. Zorina. “‘Not for Ornament’: Patenting Activity by Women Inventors.” Journal of Interdisciplinary History 33, no. 2 (2000): 159-95.

Khan, B. Zorina. “Technological Innovations and Endogenous Changes in U.S. Legal Institutions, 1790-1920.” NBER Working Paper No. 10346. Cambridge, MA: March 2004. (Available at www.nber.org.)

Khan, B. Zorina, and Kenneth L. Sokoloff. “‘Schemes of Practical Utility’: Entrepreneurship and Innovation among ‘Great Inventors’ in the United States, 1790-1865.” Journal of Economic History 53, no. 2 (1993): 289-307.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Entrepreneurship and Technological Change in Historical Perspective.” Advances in the Study of Entrepreneurship, Innovation, and Economic Growth 6 (1993): 37-66.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Two Paths to Industrial Development and Technological Change.” In Technological Revolutions in Europe, 1760-1860, edited by Maxine Berg and Kristine Bruland. London: Edward Elgar, 1997.

Khan, B. Zorina, and Kenneth L. Sokoloff. “The Early Development of Intellectual Property Institutions in the United States.” Journal of Economic Perspectives 15, no. 3 (2001): 233-46.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Innovation of Patent Systems in the Nineteenth Century: A Comparative Perspective.” Unpublished manuscript (2001).

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Democratic Invention in Nineteenth-century America.” American Economic Review Papers and Proceedings 94 (2004): 395-401.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth: Evidence from the Great Inventors of the United States, 1790-1930.” In Institutions and Economic Growth, edited by Theo Eicher and Cecilia Garcia-Penalosa. Cambridge, MA: MIT Press, 2006.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Long-Term Change in the Organization of Inventive Activity.” Science, Technology and the Economy 93 (1996): 1286-92.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “The Geography of Invention in the American Glass Industry, 1870-1925.” Journal of Economic History 60, no. 3 (2000): 700-29.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Market Trade in Patents and the Rise of a Class of Specialized Inventors in the Nineteenth-century United States.” American Economic Review 91, no. 2 (2001): 39-44.

Landes, David S. Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge: Cambridge University Press, 1969.

Lerner, Josh. “Patent Protection and Innovation over 150 Years.” NBER Working Paper No. 8977. Cambridge, MA: June 2002.

Levin, Richard, A. Klevorick, R. Nelson and S. Winter. “Appropriating the Returns from Industrial Research and Development.” Brookings Papers on Economic Activity 3 (1987): 783-820.

Lo, Shih-Tse. “Strengthening Intellectual Property Rights: Evidence from the 1986 Taiwanese Patent Reforms.” Ph.D. diss., University of California at Los Angeles, 2005.

Machlup, Fritz. An Economic Review of the Patent System. Washington, DC: U.S. Government Printing Office, 1958.

Machlup, Fritz. “The Supply of Inventors and Inventions.” In The Rate and Direction of Inventive Activity, edited by R. Nelson. Princeton: Princeton University Press, 1962.

Machlup, Fritz, and Edith Penrose. “The Patent Controversy in the Nineteenth Century.” Journal of Economic History 10, no. 1 (1950): 1-29.

Macleod, Christine. Inventing the Industrial Revolution. Cambridge: Cambridge University Press, 1988.

McCloy, Shelby T. French Inventions of the Eighteenth Century. Lexington: University of Kentucky Press, 1952.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Growth. New York: Oxford University Press, 1990.

Moser, Petra. “How Do Patent Laws Influence Innovation? Evidence from Nineteenth-century World Fairs.” American Economic Review 95, no. 4 (2005): 1214-36.

O’Dell, T. H. Inventions and Official Secrecy: A History of Secret Patents in the United Kingdom. Oxford: Clarendon Press, 1994.

Penrose, Edith. The Economics of the International Patent System. Baltimore: Johns Hopkins University Press, 1951.

Sáiz González, Patricio. Invención, patentes e innovación en la España contemporánea. Madrid: OEPM, 1999.

Schmookler, Jacob. “Economic Sources of Inventive Activity.” Journal of Economic History 22 (1962): 1-20.

Schmookler, Jacob. Invention and Economic Growth. Cambridge, MA: Harvard University Press, 1966.

Schmookler, Jacob, and Zvi Griliches. “Inventing and Maximizing.” American Economic Review (1963): 725-29.

Schiff, Eric. Industrialization without National Patents: The Netherlands, 1869-1912; Switzerland, 1850-1907. Princeton: Princeton University Press, 1971.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48, no. 4 (1988): 813-50.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L., and B. Zorina Khan. “The Democratization of Invention during Early Industrialization: Evidence from the United States, 1790-1846.” Journal of Economic History 50, no. 2 (1990): 363-78.

Sutthiphisal, Dhanoos. “Learning-by-Producing and the Geographic Links between Invention and Production.” Unpublished manuscript, McGill University, 2005.

Takeyama, Lisa N. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42, no. 2 (1994): 155-66.

U.S. Patent Office. Annual Report of the Commissioner of Patents. Washington, DC: various years.

Van Dijk, T. “Patent Height and Competition in Product Improvements.” Journal of Industrial Economics 44, no. 2 (1996): 151-67.

Vojacek, Jan. A Survey of the Principal National Patent Systems. New York: Prentice-Hall, 1936.

Woodcroft, Bennet. Alphabetical Index of Patentees of Inventions [1617-1852]. New York: A. Kelley, 1854, reprinted 1969.

Woodcroft, Bennet. Titles of Patents of Invention: Chronologically Arranged from March 2, 1617 to October 1, 1852. London: Queen’s Printing Office, 1854.

Citation: Khan, B. “An Economic History of Patent Institutions”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-patent-institutions/

Monetary Unions

Benjamin J. Cohen, University of California at Santa Barbara

Monetary tradition has long assumed that, in principle, each sovereign state issues and manages its own exclusive currency. In practice, however, there have always been exceptions — countries that elected to join together in a monetary union of some kind. Not all monetary unions have stood the test of time; in fact, many past initiatives have long since passed into history. Yet interest in monetary union persists, stimulated in particular by the example of the European Union’s Economic and Monetary Union (EMU), which has replaced a diversity of national monies with one joint currency called the euro. Today, the possibility of monetary union is actively discussed in many parts of the world.

A monetary union may be defined as a group of two or more states sharing a common currency or equivalent. Although some sources extend the definition to include the monetary regimes of national federations such as the United States or of imperial agglomerations such as the old Austro-Hungarian Empire, the conventional practice is to limit the term to agreements among units that are recognized as fully sovereign states under international law. The antithesis of a monetary union, of course, is a national currency with an independent central bank and a floating exchange rate.

In the strictest sense of the term, monetary union means complete abandonment of separate national currencies and full centralization of monetary authority in a single joint institution. In reality, considerable leeway exists for variations of design along two key dimensions. These dimensions are institutional provisions for (1) the issuing of currency and (2) the management of decisions. Currencies may continue to be issued by individual governments, tied together in an exchange-rate union. Alternatively, currencies may be replaced not by a joint currency but rather by the money of a larger partner — an arrangement generically labeled dollarization after the United States dollar, the money that is most widely used for this purpose. Similarly, monetary authority may continue to be exercised in some degree by individual governments or, alternatively, may be delegated not to a joint institution but rather to a single partner such as the United States.

In political terms, monetary unions divide into two categories, depending on whether national monetary sovereignty is shared or surrendered. Unions based on a joint currency or an exchange-rate union in effect pool monetary authority to some degree. They are a form of partnership or alliance of nominal equals. Unions created by dollarization are more hierarchical, a subordinate follower-leader type of regime.

The greatest attraction of a monetary union is that it reduces transactions costs as compared with a collection of separate national currencies. With a single money or equivalent, there is no need to incur the expense of currency conversion or hedging against exchange risk in transactions among the partners. But there are also two major economic disadvantages for governments to consider.

First, individual partners lose control of both the money supply and exchange rate as policy instruments to cope with domestic or external disturbances. Against a monetary union’s efficiency gains at the microeconomic level, governments must compare the cost of sacrificing autonomy of monetary policy at the macroeconomic level.

Second, individual partners lose the capacity derived from an exclusive national currency to augment public spending at will via money creation — a privilege known as seigniorage. Technically defined as the excess of the nominal value of a currency over its cost of production, seigniorage can be understood as an alternative source of revenue for the state beyond what can be raised by taxes or by borrowing from financial markets. Sacrifice of the seigniorage privilege must also be compared against a monetary union’s efficiency gains.
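In symbols (the notation is illustrative only), the seigniorage earned on a unit of currency is

\[
S \;=\; V_{\text{face}} \;-\; C_{\text{production}} ,
\]

where \(V_{\text{face}}\) is the nominal or face value of the money and \(C_{\text{production}}\) is the cost of producing it. For modern fiat paper money the cost of production is a small fraction of face value, so the issuing government captures nearly the full face value of each note it puts into circulation.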

The seriousness of these two losses will depend on the type of monetary union adopted. In an alliance-type union, where authority is not surrendered but pooled, monetary control is delegated to the union’s joint institution, to be shared and in some manner collectively managed by all the countries involved. Hence what each partner gives up individually is gained collectively by all. Though individual states may no longer have much latitude to act unilaterally, each government retains a voice in decision-making for the group as a whole. Losses will be greater with dollarization, which by definition transfers all monetary authority to the dominant power. Some measure of seigniorage may be retained by subordinate partners, but only with the consent of the leader.

The idea of monetary union among sovereign states was widely promoted in the nineteenth century, mainly in Europe, despite the fact that most national currencies were already tied together closely by the fixed exchange rates of the classical gold standard. Further efficiency gains could be realized, proponents argued, while little would be lost at a time when activist monetary policy was still unknown.

“Universal Currency” Fails, 1867

Most ambitious was a projected “universal currency” to be based on equivalent gold coins issued by the three biggest financial powers of the day: Britain, France, and the United States. As it happened, the gold content of French coins at the time was such that a 25-franc piece, not then in existence but easily mintable, would have contained 112.008 grains of gold, very close to both the English sovereign (containing 113.001 grains) and the American half-eagle, equal to five dollars (containing 116.1 grains). Why not, then, seek some sort of standardization of coinage among the three countries to achieve the equivalent of one single money? That was the proposal of a major monetary conference sponsored by the French Government to coincide with an international exposition in Paris in 1867. Delegates from some 20 countries, with the critical exception of Britain’s representatives, enthusiastically supported creation of a universal currency based on a 25-franc piece and called for appropriate reductions in the gold content of the sovereign and half-eagle. In the end, however, no action was taken by either London or Washington, and for lack of sustained political support the idea ultimately faded away.
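The quoted grain weights show just how close the three coinages already were; the following percentages are simple calculations from the figures in the preceding paragraph rather than numbers reported by the conference:

\[
\frac{113.001 - 112.008}{112.008} \approx 0.9\% \ \text{(sovereign)}, \qquad
\frac{116.1 - 112.008}{112.008} \approx 3.7\% \ \text{(half-eagle)}.
\]

Both coins contained slightly more gold than the proposed 25-franc standard, which is why the plan called for small reductions, rather than increases, in their gold content.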

Latin Monetary Union

Two years before the 1867 conference, however, the French Government did succeed in gaining agreement for a more limited initiative — the Latin Monetary Union (LMU). Joining Belgium, Italy, and Switzerland together with France, the LMU was intended to standardize the existing gold and silver coinages of all four countries. Greece subsequently adhered to the terms of the LMU in 1868, though not becoming a formal member until 1876. In practical terms, a monetary partnership among these countries had already begun to coalesce even earlier as a result of independent decisions by Belgium, Greece, Italy, and Switzerland to model their currency systems on that of France. Each state chose to adopt a basic unit equal in value to the French franc — actually called a franc in Belgium and Switzerland — with equivalent subsidiary units defined according to the French-inspired decimal system. Starting in the 1850s, however, serious Gresham’s Law type problems developed as a result of differences in the weight and fineness of silver coins circulating in each country. The LMU established uniform standards for national coinages and, by making each member’s money legal tender throughout the Union, effectively created a wider area for the circulation of a harmonized supply of specie coins. In substance a formal exchange-rate union was created, with the authority for management of participating currencies remaining with each separate government.

As a group, members were distinguished from other countries by the reciprocal obligation of their central banks to accept one another’s coins at par and without limit. Soon after its founding, however, beginning in the late 1860s, the LMU was subjected to considerable strain owing to a global glut of silver production. The resulting depreciation of silver eventually led to a suspension of silver coinage by all the partners, effectively transforming the LMU from a bimetallic standard into what came to be called a “limping gold standard.” Even so, the arrangement managed to hold together until the generalized breakdown of global monetary relations during World War I. The LMU was not formally dissolved until 1927, following Switzerland’s decision to withdraw during the previous year.

Scandinavian Monetary Union

A similar arrangement also emerged in northern Europe: the Scandinavian Monetary Union (SMU), formed in 1873 by Sweden and Denmark and joined two years later by Norway. The Scandinavian Monetary Union too was an exchange-rate union designed to standardize existing coinages, although unlike the LMU the SMU was based from the start on a monometallic gold standard. The Union established the krone (crown) as a uniform unit of account, with national currencies permitted full circulation as legal tender in all three countries. As in the LMU, members of the SMU were distinguished from outsiders by the reciprocal obligation to accept one another’s currencies at par and without limit; also as in the LMU, mutual acceptability was initially limited to gold and silver coins only. In 1885, however, the three members went further, agreeing to accept one another’s bank notes and drafts as well, thus facilitating free intercirculation of all paper currency and resulting eventually in the total disappearance of exchange-rate quotations among the three moneys. By the turn of the century the SMU had come to function, in effect, as a single unit for all payments purposes, until relations were disrupted by the suspension of convertibility and floating of individual currencies at the start of World War I. Despite subsequent efforts during and after the war to restore at least some elements of the Union, particularly following the members’ return to the gold standard in the mid-1920s, the agreement was finally abandoned following the global financial crisis of 1931.

German Monetary Union

Repeated efforts to standardize coinages were made as well by various German states prior to Germany’s political union, but with rather less success. Early accords, following the start of the Zollverein (the German region’s customs union) in 1834, ostensibly established a German Monetary Union — technically, like the LMU and SMU, also an exchange-rate union — but in fact divided the area into two quite distinct currency alliances: one encompassing most northern states, using the thaler as its basic monetary unit; and a second including states in the south, based on the florin (also known as the guilder or gulden). Free intercirculation of coins was guaranteed in both groups but not at par: the exchange rate between the two units of account was fixed at one thaler for 1.75 florins (formally, 14: 24.5) rather than one-for-one. Moreover, states remained free to mint non-standardized coins in addition to their basic units, and many important German states (e.g., Bremen, Hamburg, and Schleswig-Holstein) chose to stay outside the agreement altogether. Nor were matters helped much by the short-lived Vienna Coinage Treaty signed with Austria in 1857, which added yet a third currency, Austria’s own florin, to the mix with a value slightly higher than that of the south German unit. The Austro-German Monetary Union was dissolved less than a decade later, following Austria’s defeat in the 1866 Austro-Prussian War. A full merger of all the currencies of the German states did not finally arrive until after consolidation of modern Germany, under Prussian leadership, in 1871.
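For convenience, the formal ratio cited in the treaty reduces directly to the fixed rate mentioned above:

\[
\frac{24.5 \ \text{florins}}{14 \ \text{thaler}} = 1.75 \ \text{florins per thaler}.
\]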

Belgium-Luxembourg Economic Union

The only truly successful monetary union in Europe prior to EMU came in 1922 with the birth of the Belgium-Luxembourg Economic Union (BLEU), which remained in force for more than seven decades until blended into EMU in 1999. Following severance of its traditional ties with the German Zollverein after World War I, Luxembourg elected to link itself commercially and financially with Belgium, agreeing to a comprehensive economic union including a merger of their separate money systems. Reflecting the partners’ considerable disparity of size (Belgium’s population is roughly thirty times Luxembourg’s), Belgian francs under BLEU formed the largest part of the money stock of Luxembourg as well as Belgium, and alone enjoyed full status as legal tender in both countries. Only Belgium, moreover, had a full-scale central bank. The Luxembourg franc was issued by a more modest institution, the Luxembourg Monetary Institute, was limited in supply, and served as legal tender just within Luxembourg itself. Despite the existence of formal joint decision-making bodies, Luxembourg in effect existed largely as an appendage of the Belgian monetary system until both nations joined their EU partners in creating the euro.

Monetary Disintegration

Europe in the twentieth century has also seen the disintegration of several monetary unions, usually as a by-product of political dissent or dissolution. A celebrated instance occurred after World War I when the Austro-Hungarian Empire was dismembered by the Treaty of Versailles. Almost immediately, in an abrupt and quite chaotic manner, new currencies were introduced by each successor state — including Czechoslovakia, Hungary, Yugoslavia, and ultimately even shrunken Austria itself — to replace the old imperial Austrian crown. Comparable examples have also been provided more recently, after the end of the Cold War, following fragmentation along ethnic lines of both the Czechoslovak and Yugoslav federations. Most spectacular was the collapse of the former ruble zone following the break-up of the seven-decade-old Soviet Union in late 1991. Out of the rubble of the ruble no fewer than a dozen new currencies emerged to take their place on the world stage.

Outside Europe, the idea of monetary union was promoted mainly in the context of colonial or other dependency relationships, including both alliance-type and dollarization arrangements. Though most imperial regimes were quickly abandoned in favor of newly created national currencies once decolonization began after World War II, a few have survived in modified form to the present day.

British Colonies

Alliance-type arrangements emerged in the colonial domains of both Britain and France, the two biggest imperial powers of the nineteenth century. First to act were the British, who after some experimentation succeeded in creating a series of common currency zones, each closely tied to the pound sterling through the mechanism of a currency board. With a currency board, exchange rates were firmly pegged to the pound and full sterling backing was required for any new issue of the colonial money. Joint currencies were created first in West Africa (1912) and East Africa (1919) and later for British possessions in Southeast Asia (1938) and the Caribbean (1950). In southern Africa, an equivalent zone was established during the 1920s based on the South African pound (later the rand), which became the sole legal tender in three of Britain’s nearby possessions, Bechuanaland (later Botswana), British Basutoland (later Lesotho), and Swaziland, as well as in South West Africa (later Namibia), a former German colony administered by South Africa under a League of Nations mandate. Of Britain’s various arrangements, only two still exist in some form.

East Caribbean

One is in the Caribbean, where Britain’s monetary legacy has proved remarkably durable. The British Caribbean Currency Board evolved first into the Eastern Caribbean Currency Authority in 1965 and then the Eastern Caribbean Central Bank in 1983, issuing one currency, the Eastern Caribbean dollar, to serve as legal tender for all participants. Included in the Eastern Caribbean Currency Union (ECCU) are the six independent microstates of Antigua and Barbuda, Dominica, Grenada, St. Kitts and Nevis, St. Lucia, and St. Vincent and the Grenadines, plus two islands that are still British dependencies, Anguilla and Montserrat. Embedded in a broadening network of other related agreements among the same governments (the Eastern Caribbean Common Market, the Organization of Eastern Caribbean States), the ECCU has functioned without serious difficulty since its formal establishment in 1965.

Southern Africa

The other is in southern Africa, where previous links have been progressively formalized, first in 1974 as the Rand Monetary Area, later in 1986 under the label Common Monetary Area (CMA), though, significantly, without the participation of diamond-rich Botswana, which has preferred to rely on its own national money. The CMA started as a monetary union tightly based on the rand, a local form of dollarization reflecting South Africa’s economic dominance of the region. But with the passage of time the degree of hierarchy has diminished considerably, as the three remaining junior partners have asserted their growing sense of national identity. Especially since the 1970s, the arrangement has been transformed into a looser exchange-rate union as each of South Africa’s partners introduced its own distinct national currency. One of them, Swaziland, has even gone so far as to withdraw the rand’s legal-tender status within its own borders. Moreover, though all three continue to peg their moneys to the rand at par, they are no longer bound by currency board-like provisions on money creation and may now in principle vary their exchange rates at will.

Africa’s CFA Franc Zone

In the French Empire monetary union did not arrive until 1945, when the newly restored government in Paris decided to consolidate the diverse currencies of its many African dependencies into one money, le franc des Colonies Françaises d’Afrique (CFA francs). Subsequently, in the early 1960s, as independence came to France’s African domains, the old colonial franc was replaced by two new regional currencies, each cleverly named to preserve the CFA franc appellation: for the eight present members of the West African Monetary Union, le franc de la Communauté Financière de l’Afrique, issued by the Central Bank of West African States; and for the six members of the Central African Monetary Area, le franc de la Coopération Financière Africaine, issued by the Bank of Central African States. Together the two groups comprise the Communauté Financière Africaine (African Financial Community). Though each of the two currencies is legal tender only within its own region, the two are equivalently defined and have always been jointly managed under the aegis of the French Ministry of Finance as integral parts of a single monetary union, popularly known as the CFA Franc Zone.

Elsewhere imperial powers preferred some version of a dollarization-type regime, promoting use of their own currencies in colonial possessions to reinforce dependency relationships — though few of these hierarchical arrangements survived the arrival of decolonization. The only major exceptions are to be found among smaller countries with special ties to the United States. Most prominently, these include Panama and Liberia, two states that owe their very existence to U.S. initiatives. Immediately after gaining its independence in 1903 with help from Washington, Panama adopted America’s greenback as its national currency in lieu of a money of its own. In similar fashion during World War II, Liberia — a nation founded by former American slaves — made the dollar its sole legal tender, replacing the British West African colonial coinage that had previously dominated the local money supply. Other long-time dollarizers include the Marshall Islands, Micronesia, and Palau, Pacific Ocean microstates that were all once administered by the United States under United Nations trusteeships. Most recently, the dollar replaced failed local currencies in Ecuador in 2000 and in El Salvador in 2001 and was adopted by East Timor when that state gained its independence in 2002.

Europe’s Monetary Union

The most dramatic episode in the history of monetary unions is of course EMU, in many ways a unique undertaking — a group of fully independent states, all partners in the European Union, that have voluntarily agreed to replace existing national currencies with one newly created money, the euro. The euro was first introduced in 1999 in electronic form (a “virtual” currency), with notes and coins following in 2002. Moreover, even while retaining political sovereignty, member governments have formally delegated all monetary sovereignty to a single joint authority, the European Central Bank. These are not former overseas dependencies like the members of ECCU or the CFA Franc Zone, inheriting arrangements that had originated in colonial times; nor are they small fragile economies like Ecuador or El Salvador, surrendering monetary sovereignty to an already proven and popular currency like the dollar. Rather, these are established states of long standing and include some of the biggest national economies in the world, engaged in a gigantic experiment of unprecedented proportions. Not surprisingly, therefore, EMU has stimulated growing interest in monetary union in many parts of the world. Despite the failure of many past initiatives, the future could see yet more joint currency ventures among sovereign states.

Bartel, Robert J. “International Monetary Unions: The XIXth Century Experience.” Journal of European Economic History 3, no. 3 (1974): 689-704.

Bordo, Michael, and Lars Jonung. Lessons for EMU from the History of Monetary Unions. London: Institute of Economic Affairs, 2000.

Capie, Forrest. “Monetary Unions in Historical Perspective: What Future for the Euro in the International Financial System.” In Ideas for the Future of the International Monetary System, edited by Michele Fratianni, Dominick Salvatore, and Paolo Savona, 77-95. Boston: Kluwer Academic Publishers, 1999.

Cohen, Benjamin J. “Beyond EMU: The Problem of Sustainability.” In The Political Economy of European Monetary Unification, second edition, edited by Barry Eichengreen and Jeffry A. Frieden, 179-204. Boulder, CO: Westview Press, 2001.

Cohen, Benjamin J. The Geography of Money. Ithaca, NY: Cornell University Press, 1998.

De Cecco, Marcello. “European Monetary and Financial Cooperation before the First World War.” Rivista di Storia Economica 9 (1992): 55-76.

Graboyes, Robert F. “The EMU: Forerunners and Durability.” Federal Reserve Bank of Richmond Economic Review 76, no. 4 (1990): 8-17.

Hamada, Koichi, and David Porteous. “L’Intégration Monétaire dans Une Perspective Historique.” Revue d’Économie Financière 22 (1992): 77-92.

Helleiner, Eric. The Making of National Money: Territorial Currencies in Historical Perspective. Ithaca, NY: Cornell University Press, 2003.

Perlman, M. “In Search of Monetary Union.” Journal of European Economic History 22, no. 2 (1993): 313-332.

Vanthoor, Wim F.V. European Monetary Union Since 1848: A Political and Historical Analysis. Brookfield, VT: Edward Elgar, 1996.

Citation: Cohen, Benjamin. “Monetary Unions”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/monetary-unions/