
The Sterling Area

Jerry Mushin, Victoria University of Wellington

1931-39

One of the consequences of the economic crisis of 1929–33 was that a large number of countries abandoned the gold standard. This meant that their governments no longer guaranteed, in gold terms, their currencies’ values. The United Kingdom (and the Irish Free State, whose currency had a rigidly fixed exchange rate with the British pound) left the gold standard in 1931. To reduce the fluctuation of exchange rates, many of the countries that left the gold standard decided to stabilize their currencies with respect to the value of the British pound (which is also known as sterling). These countries became known, initially unofficially, as the Sterling Area (and also as the Sterling Bloc). Sterling Area countries tended (as they had before the end of the gold standard) to hold their reserves in the form of sterling balances in London.

The countries that formed the Sterling Area generally had at least one of two characteristics: strong historical links with the UK, or the UK as a major market for their exports. Membership of the Sterling Area was not constant. By 1933, it comprised most of the British Empire, together with Denmark, Egypt, Estonia, Finland, Iran, Iraq, Latvia, Lithuania, Norway, Portugal, Siam (Thailand), Sweden, and other countries. Despite being parts of the British Empire, Canada, Hong Kong, and Newfoundland did not join the Sterling Area, although Hong Kong joined after the Second World War. Other countries, including Argentina, Brazil, Bolivia, Greece, Japan, and Yugoslavia, stabilized their exchange rates with respect to the British pound for several years and (especially Argentina and Japan) often held significant reserves in sterling but, partly because they enforced exchange control, were not regarded as part of the Sterling Area.

Following the 1931 crisis, the UK introduced restrictions on overseas lending. This provided an additional incentive for Sterling Area membership. Countries that pegged their currencies to the British pound, and held their official external reserves largely in sterling assets, had preferential access to the British capital market. The British pound was perceived to have a relatively stable value and to be widely acceptable.

Membership of the Sterling Area also involved an effective pooling of non-sterling (especially U.S. dollar) reserves, which were frequently a scarce resource. This was of mutual benefit; the surpluses of some countries financed the deficits of others. The UK could perhaps be regarded as the banker for the other members of the Sterling Area.

Following the gold standard crisis in the early 1930s, the Sterling Area was one of three major currency groups. The gold bloc, comprising Belgium, France, Italy, Luxembourg, the Netherlands, Switzerland, and Poland (and the colonial territories of four of these), consisted of those countries that, in 1933, expressed a formal determination to continue to operate the gold standard. However, this bloc began to collapse from 1935. The third group of countries was known as the exchange-control countries. The members of this bloc, comprising Austria, Bulgaria, Czechoslovakia, Germany, Greece, Hungary, Turkey, and Yugoslavia, regulated the currency market and imposed tariffs and import restrictions. Germany was the dominant member of this bloc.

1939-45

In September 1939, at the start of the Second World War, the British government introduced exchange controls. However, there were no restrictions on payments between Sterling Area countries. The value of the pound was fixed at US$4.03, which was a devaluation of about 14%. Partly as a result of these measures, most of the Sterling Area countries without a British connection withdrew. Egypt, the Faroe Islands, Iceland, and Iraq remained members of the Sterling Area, and the Free French (non-Vichy) territories joined it.

1945-72

There were three main changes in the Sterling Area after the Second World War. First, its membership was precisely defined, as the Scheduled Territories, in the Exchange Control Act, 1947. It was previously unclear whether certain countries were members. Second, the Sterling Area became more discriminatory. Members tended not to restrict trade with other Sterling Area countries while applying restrictions to trade with other countries. The intention was to economize on the use of United States dollars, and other non-sterling currencies, which were in short supply. Third, war finances had increased many countries’ sterling balances in London without increasing the reserves held by the British government. This exposed the reserves to heavier pressures than they had had to withstand before the war.

In 1947, the Sterling Area was defined as all members of the Commonwealth except Canada and Newfoundland, all British territories, Burma, Iceland, Iraq, Irish Republic, Jordan, Kuwait and the other Persian Gulf sheikhdoms, and Libya. In the rest of the world, which was categorized as the Prescribed Territories, controls prevented the conversion of British pounds to U.S. dollars (and to currencies that were pegged to the U.S. dollar). Formal convertibility of British pounds into U.S. dollars, which was introduced in 1958, applied only to non-residents of the Sterling Area (Schenk, 2010).

Following the 1949 devaluation of the British pound, by 30.5% from US$4.03 to US$2.80, much of the rest of the world, and almost all of the Sterling Area, devalued too. This indicates the major international trading role of the British economy. A notable exception, which did not devalue immediately, was Pakistan. Because most other currencies devalued with sterling, most sterling parities did not change, which largely negated the intended effect of the British devaluation.

The world economy had changed by the time of the next sterling crisis. The immediate international impact of the 1967 devaluation of the British pound, by 14.3% from US$2.80 to US$2.40, reflects the diminished significance of the Sterling Area. In marked contrast to the response to the 1949 devaluation, only fourteen members of the International Monetary Fund devalued their currencies following the British devaluation of 1967. A significant proportion of Sterling Area countries, including Australia, India, Pakistan, and South Africa, did not devalue. Many of the other Sterling Area countries, including Ceylon (Sri Lanka), Hong Kong, Iceland, Fiji, and New Zealand, devalued by different percentages, which changed their currencies’ sterling parities. Outside the Sterling Area, a small number of countries devalued; most of these devalued by percentages that were different to the British devaluation. The effect was that a large number of sterling parities were changed by the 1967 devaluation.
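
The quoted percentages follow directly from the dollar parities given above; as a quick check of the arithmetic,

\[
\text{devaluation} = \frac{\text{old parity} - \text{new parity}}{\text{old parity}}, \qquad \frac{4.03 - 2.80}{4.03} \approx 30.5\%, \qquad \frac{2.80 - 2.40}{2.80} \approx 14.3\%.
\]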

The Sterling Area showed obvious signs of decline even before the 1967 devaluation. For example, Nigeria ended its sterling parity in 1962 and Ghana ended its sterling parity in 1965. In 1964, sterling was 83% of the official reserves of overseas Sterling Area countries, but this share had decreased to 75% in 1966 and to 65% in 1967 (Schenk, 2010). The role of the UK in the Sterling Area was frequently seen, especially by France, as an obstacle in the British application to join the European Economic Community.

The reserves of the overseas members of the Sterling Area suffered a capital loss following the 1967 devaluation. This encouraged diversification of reserves into other types of assets. The British government responded by negotiating the Basel Agreements with other governments in the Sterling Area (Yeager, 1976). Each country in the Sterling Area undertook to limit its holdings of non-sterling assets and, in return, the U.S. dollar value of its sterling assets was guaranteed. These agreements restrained, but did not halt, the downward trend of holdings of sterling reserves. The Basel Agreements were partly underwritten by other central banks, which were concerned for international monetary stability, and were arranged with the assistance of the Bank for International Settlements.

1972-79

In 1972, the UK ended the fixed exchange rate, in U.S. dollars, of the pound. In 1971 or in 1972, most other Sterling Area countries ended their fixed exchange rates with respect to the British pound. Some of these countries, including Australia, Hong Kong, Jamaica, Jordan, Kenya, Malaysia, New Zealand, Pakistan, Singapore, South Africa, Sri Lanka, Tanzania, Uganda, and Zambia, pegged their currencies to the U.S. dollar. The minority of Sterling Area members that retained their sterling parities included Bangladesh, Gambia, Irish Republic, Seychelles, and the Eastern Caribbean Currency Union. Other countries in the Sterling Area introduced floating exchange rates.

Also in 1972, the UK extended to Sterling Area countries the exchange controls on capital transactions that had previously applied only to other countries. This decision, combined with the changes in sterling parities, meant that the Sterling Area effectively ceased to exist in 1972.

In 1979, when it joined the European Monetary System, the Irish Republic ended its fixed exchange rate with respect to the British pound. Membership of the EMS, which the UK did not join until 1990, required the ending of the link between the British pound and the Irish Republic pound. Also in 1979, the UK abolished all of its remaining exchange controls.

Overview

The Sterling Area was a zone of relative stability of exchange rates but not a monetary union. It did not have a single central bank. Distinct national currencies circulated within its boundaries, and their exchange rates, although fixed with respect to the British pound, were occasionally changed. For example, although the New Zealand pound was devalued in 1949 by the same percentage as the British pound, it was revalued in 1948 and devalued in 1967, both relative to the British pound. The other important feature of the Sterling Area was that capital movements between its members were generally unregulated.

The decline of the Sterling Area was related to the decline of the British pound as a reserve currency. In 1950, more than 55% of the world’s reserves were in sterling (Schenk, 2010). In 2011, the proportion was about 2% (International Monetary Fund).

In addition to the UK, the vestige of the Sterling Area now consists only of the Falkland Islands, Gibraltar, Guernsey, the Isle of Man, Jersey, and St. Helena, and is of purely local significance. No other countries now fix their exchange rates in terms of the British pound. Since 1985, no members of the International Monetary Fund have specified fixed exchange rates in British pounds. In one generation, the British pound has evolved from a pivotal role in the world economy to its present minor role.

References and other important sources:

Aldcroft, Derek and Michael Oliver. Exchange Rate Regimes in the Twentieth Century. Edward Elgar Publishing, Cheltenham, 1998.

Conan, Arthur. The Problem of Sterling. Macmillan Press, London, 1966.

Day, Alan. Outline of Monetary Economics. Oxford University Press, 1966.

McMahon, Christopher. Sterling in the Sixties. Oxford University Press, 1964.

Sayers, Richard. Modern Banking [7th ed]. Oxford University Press, 1967.

Scammell, W.M. The International Economy since 1945 [2nd ed]. Macmillan Press, London, 1983.

Schenk, Catherine. The Decline of Sterling: Managing the Retreat of an International Currency, 1945–92. Cambridge University Press, 2010.

Tew, Brian. The Evolution of the International Monetary System, 1945–88. Hutchinson and Co, London, 1988.

Wells, Sidney. International Economics. George Allen and Unwin Ltd, London, 1971.

Yeager, Leland. International Monetary Relations: Theory, History, and Policy [2nd ed]. Harper and Row Publishers, New York, 1976.

Women Workers in the British Industrial Revolution

Joyce Burnette, Wabash College

Historians disagree about whether the British Industrial Revolution (1760-1830) was beneficial for women. Frederick Engels, writing in the late nineteenth century, thought that the Industrial Revolution increased women’s participation in labor outside the home, and claimed that this change was emancipating.1 More recent historians dispute the claim that women’s labor force participation rose, and focus more on the disadvantages women experienced during this time period.2 One thing is certain: the Industrial Revolution was a time of important changes in the way that women worked.

The Census

Unfortunately, the historical sources on women’s work are neither as complete nor as reliable as we would like. Aggregate information on the occupations of women is available only from the census, and while census data has the advantage of being comprehensive, it is not a very good measure of work done by women during the Industrial Revolution. For one thing, the census does not provide any information on individual occupations until 1841, which is after the period we wish to study.3 Even then the data on women’s occupations is questionable. For the 1841 census, the directions for enumerators stated that “The professions &c. of wives, or of sons or daughters living with and assisting their parents but not apprenticed or receiving wages, need not be inserted.” Clearly this census would not give us an accurate measure of female labor force participation. Table One illustrates the problem further; it shows the occupations of men and women recorded in the 1851 census, for 20 occupational categories. These numbers suggest that female labor force participation was low, and that 40 percent of occupied women worked in domestic service. However, economic historians have demonstrated that these numbers are misleading. First, many women who were actually employed were not listed as employed in the census. Women who appear in farm wage books have no recorded occupation in the census.4 At the same time, the census over-estimates participation by listing in the “domestic service” category women who were actually family members. In addition, the census exaggerates the extent to which women were concentrated in domestic service occupations because many women listed as “maids”, and included in the domestic servant category in the aggregate tables, were really agricultural workers.5

Table One

Occupational Distribution in the 1851 Census of Great Britain

Occupational Category Males (thousands) Females (thousands) Percent Female
Public Administration 64 3 4.5
Armed Forces 63 0 0.0
Professions 162 103 38.9
Domestic Services 193 1135 85.5
Commercial 91 0 0.0
Transportation & Communications 433 13 2.9
Agriculture 1788 229 11.4
Fishing 36 1 2.7
Mining 383 11 2.8
Metal Manufactures 536 36 6.3
Building & Construction 496 1 0.2
Wood & Furniture 152 8 5.0
Bricks, Cement, Pottery, Glass 75 15 16.7
Chemicals 42 4 8.7
Leather & Skins 55 5 8.3
Paper & Printing 62 16 20.5
Textiles 661 635 49.0
Clothing 418 491 54.0
Food, Drink, Lodging 348 53 13.2
Other 445 75 14.4
Total Occupied 6545 2832 30.2
Total Unoccupied 1060 5294 83.3

Source: B.R. Mitchell, Abstract of British Historical Statistics, Cambridge: Cambridge University Press, 1962, p. 60.

Domestic Service

Domestic work – cooking, cleaning, caring for children and the sick, fetching water, making and mending clothing – took up the bulk of women’s time during the Industrial Revolution period. Most of this work was unpaid. Some families were well-off enough that they could employ other women to do this work, as live-in servants, as charring women, or as service providers. Live-in servants were fairly common; even middle-class families had maids to help with the domestic chores. Charring women did housework on a daily basis. In London women were paid 2s.6d. per day for washing, which was more than three times the 8d. typically paid for agricultural labor in the country. However, a “day’s work” in washing could last 20 hours, more than twice as long as a day’s work in agriculture.6 Other women worked as laundresses, doing the washing in their own homes.

Cottage Industry

Before factories appeared, most textile manufacture (including the main processes of spinning and weaving) was carried out under the “putting-out” system. Since raw materials were expensive, textile workers rarely had enough capital to be self-employed, but would take raw materials from a merchant, spin or weave the materials in their homes, and then return the finished product and receive a piece-rate wage. This system disappeared during the Industrial Revolution as new machinery requiring water or steam power appeared, and work moved from the home to the factory.

Before the Industrial Revolution, hand spinning had been a widespread female employment. It could take as many as ten spinners to provide one hand-loom weaver with yarn, and men did not spin, so most of the workers in the textile industry were women. The new textile machines of the Industrial Revolution changed that. Wages for hand-spinning fell, and many rural women who had previously spun found themselves unemployed. In a few locations, new cottage industries such as straw-plaiting and lace-making grew and took the place of spinning, but in other locations women remained unemployed.

Another important cottage industry was the pillow-lace industry, so called because women wove the lace on pins stuck in a pillow. In the late-eighteenth century women in Bedford could earn 6s. a week making lace, which was about 50 percent more than women earned in agriculture. However, this industry too disappeared due to mechanization. Following Heathcote’s invention of the bobbinet machine (1809), cheaper lace could be made by embroidering patterns on machine-made lace net. This new type of lace created a new cottage industry, that of “lace-runners” who embroidered patterns on the lace.

The straw-plaiting industry employed women braiding straw into bands used for making hats and bonnets. The industry prospered around the turn of the century due to the invention of a simple tool for splitting the straw and to the war, which cut off competition from Italy. At this time women could earn 4s. to 6s. per week plaiting straw. This industry also declined, though, following the increase in free trade with the Continent in the 1820s.

Factories

A defining feature of the Industrial Revolution was the rise of factories, particularly textile factories. Work moved out of the home and into a factory, which used a central power source to run its machines. Water power was used in most of the early factories, but improvements in the steam engine made steam power possible as well. The most dramatic productivity growth occurred in the cotton industry. The invention of James Hargreaves’ spinning jenny (1764), Richard Arkwright’s “throstle” or “water frame” (1769), and Samuel Crompton’s spinning mule (1779, so named because it combined features of the two earlier machines) revolutionized spinning. Britain began to manufacture cotton cloth, and declining prices for the cloth encouraged both domestic consumption and export. Machines also appeared for other parts of the cloth-making process, the most important of which was Edmund Cartwright’s powerloom, which was adopted slowly because of imperfections in the early designs, but was widely used by the 1830s. While cotton was the most important textile of the Industrial Revolution, there were advances in machinery for silk, flax, and wool production as well.7

The advent of new machinery changed the gender division of labor in textile production. Before the Industrial Revolution, women spun yarn using a spinning wheel (or occasionally a distaff and spindle). Men didn’t spin, and this division of labor made sense because women were trained to have more dexterity than men, and because men’s greater strength made them more valuable in other occupations. In contrast to spinning, handloom weaving was done by both sexes, but men outnumbered women. Men monopolized highly skilled preparation and finishing processes such as wool combing and cloth-dressing. With mechanization, the gender division of labor changed. Women used the spinning jenny and water frame, but mule spinning was almost exclusively a male occupation because it required more strength, and because the male mule-spinners actively opposed the employment of female mule-spinners. Women mule-spinners in Glasgow, and their employers, were the victims of violent attacks by male spinners trying to reduce the competition in their occupation.8 While they moved out of spinning, women seem to have increased their employment in weaving (both in handloom weaving and eventually in powerloom factories). Both sexes were employed as powerloom operators.

Table Two

Factory Workers in 1833: Females as a Percent of the Workforce

Industry Ages 12 and under Ages 13-20 Ages 21+ All Ages
Cotton 51.8 65.0 52.2 58.0
Wool 38.6 46.2 37.7 40.9
Flax 54.8 77.3 59.5 67.4
Silk 74.3 84.3 71.3 78.1
Lace 38.7 57.4 16.6 36.5
Potteries 38.1 46.9 27.1 29.4
Dyehouse 0.0 0.0 0.0 0.0
Glass 0.0 0.0 0.0 0.0
Paper - 100.0 39.2 53.6
Whole Sample 52.8 66.4 48.0 56.8

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX. Mitchell collected data from 82 cotton factories, 65 wool factories, 73 flax factories, 29 silk factories, 7 potteries, 11 lace factories, one dyehouse, one “glass works”, and 2 paper mills throughout Great Britain.

While the highly skilled and highly paid task of mule-spinning was a male occupation, many women and girls were engaged in other tasks in textile factories. For example, the wet-spinning of flax, introduced in Leeds in 1825, employed mainly teenage girls. Girls often worked as assistants to mule-spinners, piecing together broken threads. In fact, females were a majority of the factory labor force. Table Two shows that 57 percent of factory workers were female, most of them under age 20. Women were widely employed in all the textile industries, and constituted the majority of workers in cotton, flax, and silk. Outside of textiles, women were employed in potteries and paper factories, but not in dye or glass manufacture. Of the women who worked in factories, 16 percent were under age 13, 51 percent were between the ages of 13 and 20, and 33 percent were age 21 and over. On average, girls earned the same wages as boys. Children’s wages rose from about 1s.6d. per week at age 7 to about 5s. per week at age 15. Beginning at age 16, a large gap between male and female wages appeared. At age 30, women factory workers earned only one-third as much as men.

Figure One

Distribution of Male and Female Factory Employment by Age, 1833


Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.

The y-axis shows the percentage of total employment within each sex that is in that five-year age category.

Figure Two

Wages of Factory Workers in 1833


Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.

Agriculture

Wage Workers

Wage-earners in agriculture generally fit into one of two broad categories – servants who were hired annually and received part of their wage in room and board, and day-laborers who lived independently and were paid a daily or weekly wage. Before industrialization servants comprised between one-third and one-half of labor in agriculture.9 For servants the value of room and board was a substantial portion of their compensation, so the ratio of money wages is an under-estimate of the ratio of total wages (see Table Three). Most servants were young and unmarried. Because servants were paid part of their wage in kind, as board, the use of the servant contract tended to fall when food prices were high. During the Industrial Revolution the use of servants seems to have fallen in the South and East.10 The percentage of servants who were female also declined in the first half of the nineteenth century.11

Table Three

Wages of Agricultural Servants (£ per year)

Year Location Male Money Wage Male In-Kind Wage Female Money Wage Female In-Kind Wage Ratio of Money Wages Ratio of Total Wages
1770 Lancashire 7 9 3 6 0.43 0.56
1770 Oxfordshire 10 12 4 8 0.40 0.55
1770 Staffordshire 11 9 4 6 0.36 0.50
1821 Yorkshire 16.5 27 7 18 0.42 0.57

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
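
As a worked illustration of why the ratio of money wages understates the ratio of total wages, take the 1770 Lancashire row of Table Three:

\[
\frac{\text{female money wage}}{\text{male money wage}} = \frac{3}{7} \approx 0.43, \qquad \frac{\text{female money + in-kind wage}}{\text{male money + in-kind wage}} = \frac{3+6}{7+9} = \frac{9}{16} \approx 0.56.
\]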

While servants lived with the farmer and received food and lodging as part of their wage, laborers lived independently, received fewer in-kind payments, and were paid a daily or a weekly wage. Though the majority of laborers were male, some were female. Table Four shows the percentage of laborers who were female at various farms in the late-eighteenth and early-nineteenth centuries. These numbers suggest that female employment was widespread, but varied considerably from one location to the next. Compared to men, female laborers generally worked fewer days during the year. The employment of female laborers was concentrated around the harvest, and women rarely worked during the winter. While men commonly worked six days per week, outside of harvest women generally averaged around four days per week.

Table Four

Percentage of Agricultural Day-Laborers Who Were Female at Selected Farms

Year Location Percent Female
1772-5 Oakes in Norton, Derbyshire 17
1774-7 Dunster Castle Farm, Somerset 27
1785-92 Dunster Castle Farm, Somerset 40
1794-5 Dunster Castle Farm, Somerset 42
1801-3 Dunster Castle Farm, Somerset 35
1801-4 Nettlecombe Barton, Somerset 10
1814-6 Nettlecombe Barton, Somerset 7
1826-8 Nettlecombe Barton, Somerset 5
1828-39 Shipton Moyne, Gloucestershire 19
1831-45 Oakes in Norton, Derbyshire 6
1836-9 Dunster Castle Farm, Somerset 26
1839-40 Lustead, Norfolk 6
1846-9 Dunster Castle Farm, Somerset 29
Sources: Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History 59 (March 1999): 41-67; Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999. Sotheron-Estcourt accounts, G.R.O. D1571; Ketton-Cremer accounts, N.R.O. WKC 5/250

The wages of female day-laborers were fairly uniform; generally a farmer paid the same wage to all the adult women he hired. Women’s daily wages were between one-third and one-half of male wages. Women generally worked shorter days, though, so the gap in hourly wages was not quite this large.12 In the less populous counties of Northumberland and Durham, male laborers were required to provide a “bondager,” a woman (usually a family member) who was available for day-labor whenever the employer wanted her.13

Table Five

Wages of Agricultural Laborers

Year Location Male Wage (d./day) Female Wage (d./day) Ratio
1770 Yorkshire 12 5 0.42
1789 Hertfordshire 16 6 0.38
1797 Warwickshire 14 6 0.43
1807 Oxfordshire 23 9 0.39
1833 Cumberland 24 12 0.50
1833 Essex 22 10 0.45
1838 Worcester 18 9 0.50

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
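
The narrowing of the gap on an hourly basis can be illustrated, as a rough calculation only, by combining the 1797 Warwickshire wages in Table Five with the approximate working days reported in note 12 (about 12 hours for men and 10 hours for women):

\[
\frac{6\text{d.} / 10\ \text{hours}}{14\text{d.} / 12\ \text{hours}} \approx \frac{0.60}{1.17} \approx 0.51 \quad \text{per hour, compared with } \frac{6}{14} \approx 0.43 \text{ per day.}
\]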

Various sources suggest that women’s employment in agriculture declined during the early nineteenth century. Enclosure increased farm size and changed the patterns of animal husbandry, both of which seem to have led to reductions in female employment.14 More women were employed during harvest than during other seasons, but women’s employment during harvest declined as the scythe replaced the sickle as the most popular harvest tool. While women frequently harvested with the sickle, they did not use the heavier scythe.15 Female employment fell the most in the East, where farms increasingly specialized in grain production. Women had more work in the West, which specialized more in livestock and dairy farming.16

Non-Wage-Earners

During the eighteenth century there were many opportunities for women to be productively employed in farm work on their own account, whether they were wives of farmers on large holdings, or wives of landless laborers. In the early nineteenth century, however, many of these opportunities disappeared, and women’s participation in agricultural production fell.

In a village that had a commons, even if the family merely rented a cottage the wife could be self-employed in agriculture because she could keep a cow, or other animals, on the commons. By careful management of her stock, a woman might earn as much during the year as her husband earned as a laborer. Women also gathered fuel from the commons, saving the family considerable expense. The enclosure of the commons, though, eliminated these opportunities. In an enclosure, land was re-assigned so as to eliminate the commons and consolidate holdings. Even when the poor had clear legal rights to use the commons, these rights were not always compensated in the enclosure agreement. While enclosure occurred at different times for different locations, the largest waves of enclosures occurred in the first two decades of the nineteenth century, meaning that, for many, opportunities for self-employment in agriculture declined at the same time as employment in cottage industry declined.17

Only a few opportunities for agricultural production remained for the landless laboring family. In some locations landlords permitted landless laborers to rent small allotments, on which they could still grow some of their own food. The right to glean on fields after harvest seems to have been maintained at least through the middle of the nineteenth century, by which time it had become one of the few agricultural activities available to women in some areas. Gleaning was a valuable right; the value of the grain gleaned was often between 5 and 10 percent of the family’s total annual income.18

In the eighteenth century it was common for farmers’ wives to be actively involved in farm work, particularly in managing the dairy, pigs, and poultry. The dairy was an important source of income for many farms, and its success depended on the skill of the mistress, who usually ran the operation with no help from men. In the nineteenth century, however, farmers’ wives were more likely to withdraw from farm management, leaving the dairy to the management of dairymen who paid a fixed fee for the use of the cows.19 While poor women withdrew from self-employment in agriculture because of lost opportunities, farmers’ wives seem to have withdrawn because greater prosperity allowed them to enjoy more leisure.

It was less common for women to manage their own farms, but not unknown. Commercial directories list numerous women farmers. For example, the 1829 Directory of the County of Derby lists 3354 farmers, of whom 162, or 4.8%, were clearly female.20 While the commercial directories themselves do not indicate to what extent these women were actively involved in their farms, other evidence suggests that at least some women farmers were actively involved in the work of the farm.21

Self-Employed

During the Industrial Revolution period women were also active businesswomen in towns. Among business owners listed in commercial directories, about 10 percent were female (see Table Six). Table Seven shows the percentage female in all the trades with at least 25 people listed in the 1788 Manchester commercial directory. Single women, married women, and widows are included in these numbers. Sometimes these women were widows carrying on the businesses of their deceased husbands, but even in this case that does not mean they were simply figureheads. Widows often continued their husband’s businesses because they had been active in management of the business while their husband was alive, and wished to continue.22 Sometimes married women were engaged in trade separately from their husbands. Women most commonly ran shops and taverns, and worked as dressmakers and milliners, but they were not confined to these areas, and appear in most of the trades listed in commercial directories. Manchester, for example, had six female blacksmiths and five female machine makers in 1846. Between 1730 and 1800 there were 121 “rouping women” selling off estates in Edinburgh.23

Table Six

Business Owners Listed in Commercial Directories

Date City Male Female Unknown Gender Percent Female
1788 Manchester 2033 199 321 8.9
1824-5 Manchester 4185 297 1671 6.6
1846 Manchester 11,942 1222 2316 9.3
1850 Birmingham 15,054 2020 1677 11.8
1850 Derby 2415 332 194 12.1

Sources: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984); Pigot and Dean’s Directory for Manchester, Salford, &c. for 1824-5 (Manchester 1825); Slater’s National Commercial Directory of Ireland (Manchester, 1846); Slater’s Royal National and Commercial Directory (Manchester, 1850)

Table Seven

Women in Trades in Manchester, 1788

Trade Men Women Gender Unknown Percent Female
Apothecary / Surgeon / Midwife 29 1 5 3.3
Attorney 39 0 3 0.0
Boot and Shoe makers 87 0 1 0.0
Butcher 33 1 1 2.9
Calenderer 31 4 5 11.4
Corn & Flour Dealer 45 4 5 8.2
Cotton Dealer 23 0 2 0.0
Draper, Mercer, Dealer of Cloth 46 15 19 24.6
Dyer 44 3 18 6.4
Fustian Cutter / Shearer 54 2 0 3.6
Grocers & Tea Dealers 91 16 12 15.0
Hairdresser & Peruke maker 34 1 0 2.9
Hatter 45 3 4 6.3
Joiner 34 0 1 0.0
Liquor dealer 30 4 14 11.8
Manufacturer, cloth 257 4 118 1.5
Merchant 58 1 18 1.7
Publichouse / Inn / Tavern 126 13 2 9.4
School master / mistress 18 10 0 35.7
Shopkeeper 107 16 4 13.0
Tailor 59 0 1 0.0
Warehouse 64 0 14 0.0

Source: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984)

Guilds often controlled access to trades, admitting only those who had served an apprenticeship and thus earned the “freedom” of the trade. Women could obtain “freedom” not only by apprenticeship, but also by widowhood. The widow of a tradesman was often considered knowledgeable enough in the trade that she was given the right to carry on the trade even without an apprenticeship. In the eighteenth century women were apprenticed to a wide variety of trades, including butchery, bookbinding, brush making, carpentry, ropemaking and silversmithing.24 Between the eighteenth and nineteenth centuries the number of females apprenticed to trades declined, possibly suggesting reduced participation by women. However, the power of the guilds and the importance of apprenticeship were also declining during this time, so the decline in female apprenticeships may not have been an important barrier to employment.25

Many women worked in the factories of the Industrial Revolution, and a few women actually owned factories. In Keighley, West Yorkshire, Ann Illingworth, Miss Rachael Leach, and Mrs. Betty Hudson built and operated textile mills.26 In 1833 Mrs. Doig owned a powerloom factory in Scotland, which employed 60 workers.27

While many women did successfully enter trades, there were obstacles to women’s employment that kept their numbers low. Women generally received less education than men (though education of the time was of limited practical use). Women may have found it more difficult than men to raise the necessary capital because English law did not consider a married woman to have any legal existence; she could not sue or be sued. A married woman was a feme covert and technically could not make any legally binding contracts, a fact which may have discouraged others from loaning money to or making other contracts with married women. However, this law was not as limiting in practice as it would seem to be in theory because a married woman engaged in trade on her own account was treated by the courts as a feme sole and was responsible for her own debts.28

The professionalization of certain occupations resulted in the exclusion of women from work they had previously done. Women had provided medical care for centuries, but the professionalization of medicine in the early-nineteenth century made it a male occupation. The Royal College of Physicians admitted only graduates of Oxford and Cambridge, schools to which women were not admitted until the twentieth century. Women were even replaced by men in midwifery. The process began in the late-eighteenth century, when we observe the use of the term “man-midwife,” an oxymoronic title suggestive of changing gender roles. In the nineteenth century the “man-midwife” disappeared, and women were replaced by physicians or surgeons for assisting childbirth. Professionalization of the clergy was also effective in excluding women. While the Church of England did not allow women ministers, the Methodist movement had many women preachers during its early years. However, even among the Methodists female preachers disappeared when lay preachers were replaced with a professional clergy in the early nineteenth century.29

In other occupations where professionalization was not as strong, women remained an important part of the workforce. Teaching, particularly in the lower grades, was a common profession for women. Some were governesses, who lived as household servants, but many opened their own schools and took in pupils. The writing profession seems to have been fairly open to women; the leading novelists of the period include Jane Austen, Charlotte and Emily Brontë, Fanny Burney, George Eliot (the pen name of Mary Ann Evans), Elizabeth Gaskell, and Frances Trollope. Female non-fiction writers of the period include Jane Marcet, Hannah More, and Mary Wollstonecraft.

Other Occupations

The occupations listed above are by no means a complete listing of the occupations of women during the Industrial Revolution. Women made buttons, nails, screws, and pins. They worked in the tin plate, silver plate, pottery and Birmingham “toy” trades (which made small articles like snuff boxes). Women worked in the mines until the Mines Act of 1842 prohibited them from working underground, but afterwards women continued to pursue above-ground mining tasks.

Married Women in the Labor Market

While there are no comprehensive sources of information on the labor force participation of married women, household budgets reported by contemporary authors give us some information on women’s participation.30 For the period 1787 to 1815, 66 percent of married women in working-class households had either a recorded occupation or positive earnings. For the period 1816-20 the rate fell to 49 percent, but in 1821-40 it recovered to 62 percent. Table Eight gives participation rates of women by date and occupation of the husband.

Table Eight

Participation Rates of Married Women

 

Period High-Wage Agriculture Low-Wage Agriculture Mining Factory Outwork Trades All
1787-1815 55 85 40 37 46 63 66
1816-1820 34 NA 28 4 42 30 49
1821-1840 22 85 33 86 54 63 62

Source: Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review 48 (February 1995): 89-117.

While many wives worked, the amount of their earnings was small relative to their husbands’ earnings. Annual earnings of married women who did work averaged only about 28 percent of their husbands’ earnings. Because not all women worked, and because children usually contributed more to the family budget than their mothers, for the average family the wife contributed only around seven percent of total family income.

Childcare

Women workers used a variety of methods to care for their children. Sometimes childcare and work were compatible, and women took their children with them to the fields or shops where they worked.31 Sometimes women working at home would give their infants opiates such as “Godfrey’s Cordial” in order to keep the children quiet while their mothers worked.32 The movement of work into factories increased the difficulty of combining work and childcare. In most factory work the hours were rigidly set, and women who took the jobs had to accept the twelve or thirteen hour days. Work in the factories was very disciplined, so the women could not bring their children to the factory, and could not take breaks at will. However, these difficulties did not prevent women with small children from working.

Nineteenth-century mothers used older siblings, other relatives, neighbors, and dame schools to provide child care while they worked.33 Occasionally mothers would leave young children home alone, but this was dangerous enough that only a few did so.34 Children as young as two might be sent to dame schools, in which women would take children into their home and provide child care, as well as some basic literacy instruction.35 In areas where lace-making or straw-plaiting thrived, children were sent from about age seven to “schools” where they learned the trade.36

Mothers might use a combination of different types of childcare. Elizabeth Wells, who worked in a Leicester worsted factory, had five children, ages 10, 8, 6, 2, and four months. The eldest, a daughter, stayed home to tend the house and care for the infant. The second child worked, and the six-year-old and two-year-old were sent to “an infant school.”37 Mary Wright, an “over-looker” in the rag-cutting room of a Buckinghamshire paper factory, had five children. The eldest worked in the rag-cutting room with her, the youngest was cared for at home, and the middle three were sent to a school; “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for the three others. They go to a school, where they are taken care of and taught to read.”38

The cost of childcare was substantial. At the end of the eighteenth century the price of child-care was about 1s. a week, which was about a quarter of a woman’s weekly earnings in agriculture.39 In the 1840s mothers paid anywhere from 9d. to 2s.6d. per week for child care, out of a wage of around 7s. per week.40

For Further Reading

Burnette, Joyce. “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain.” Economic History Review 50 (1997): 257-281.

Davidoff, Leonore, and Catherine Hall. Family Fortunes: Men and Women of the English Middle Class, 1780-1850. Chicago: University of Chicago Press, 1987.

Honeyman, Katrina. Women, Gender and Industrialisation in England, 1700-1870. New York: St. Martin’s Press, 2000.

Horrell, Sara, and Jane Humphries. “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865.” Economic History Review 48 (1995): 89-117.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Peter. “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850.” Economic History Review 44 (1991): 461-476.

Kussmaul, Ann. Servants in Husbandry in Early Modern England. Cambridge: Cambridge University Press, 1981.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Sanderson, Elizabeth. Women and Work in Eighteenth-Century Edinburgh. New York: St. Martin’s Press, 1996.

Snell, K.D.M. Annals of the Labouring Poor: Social Change and Agrarian England, 1660-1900. Cambridge: Cambridge University Press, 1985.

Valenze, Deborah. Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England. Princeton: Princeton University Press, 1985.

Valenze, Deborah. The First Industrial Woman. Oxford: Oxford University Press, 1995.

1 “Since large-scale industry has transferred the woman from the house to the labour market and the factory, and makes her, often enough, the bread-winner of the family, the last remnants of male domination in the proletarian home have lost all foundation – except, perhaps, for some of that brutality towards women which became firmly rooted with the establishment of monogamy. . . . It will then become evident that the first premise for the emancipation of women is the reintroduction of the entire female sex into public industry.” Frederick Engels, The Origin of the Family, Private Property and the State, in Karl Marx and Frederick Engels: Selected Works, New York: International Publishers, 1986, p. 508, 510.

2 Ivy Pinchbeck (Women Workers and the Industrial Revolution, Routledge, 1930) claimed that higher incomes allowed some women to withdraw from the labor force. While she saw some disadvantages resulting from this withdrawal, particularly the loss of independence, she thought that overall women benefited from having more time to devote to their homes and families. Davidoff and Hall (Family Fortunes: Men and Women of the English Middle Class, 1780-1850, Univ. of Chicago Press, 1987) agree that women withdrew from work, but they see the change as a negative result of gender discrimination. Similarly, Horrell and Humphries (“Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117) do not find that rising incomes caused declining labor force participation, and they believe that declining demand for female workers caused the female exodus from the workplace.

3 While the British census began in 1801, individual enumeration did not begin until 1841. For a detailed description of the British censuses of the nineteenth century, see Edward Higgs, Making Sense of the Census, London: HMSO, 1989.

4 For example, Helen Speechley, in her dissertation, showed that seven women who worked for wages at a Somerset farm had no recorded occupation in the 1851 census. See Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999.

5 Edward Higgs finds that removing family members from the “servants” category reduced the number of servants in Rochdale in 1851. Enumerators did not clearly distinguish between the terms “housekeeper” and “housewife.” See Edward Higgs, “Domestic Service and Household Production” in Angela John, ed., Unequal Opportunities, Oxford: Basil Blackwell, and “Women, Occupations and Work in the Nineteenth Century Censuses,” History Workshop, 1987, 23:59-80. In contrast, the censuses of the early 20th century seem to be fairly accurate; see Tim Hatton and Roy Bailey, “Women’s Work in Census and Survey, 1911-1931,” Economic History Review, Feb. 2001, LIV:87-107.

6 A shilling was equal to 12 pence, so if women earned 2s.6d. for 20 hours, they earned 1.5d. per hour. Women agricultural laborers earned closer to 1d. per hour, so the London wage was higher. See Dorothy George, London Life in the Eighteenth-Century, London: Kegan Paul, Trench, Trubner & Co., 1925, p. 208, and Patricia Malcolmson, English Laundresses, Univ. of Illinois Press, 1986, p. 25.

7 On the technology of the Industrial Revolution, see David Landes, The Unbound Prometheus, Cambridge Univ. Press, 1969, and Joel Mokyr, The Lever of Riches, Oxford Univ. Press, 1990.

8 A petition from Glasgow cotton manufacturers makes the following claim, “In almost every department of the cotton spinning business, the labour of women would be equally efficient with that of men; yet in several of these departments, such measures of violence have been adopted by the combination, that the women who are willing to be employed, and who are anxious by being employed to earn the bread of their families, have been driven from their situations by violence. . . . Messrs. James Dunlop and Sons, some years ago, erected cotton mills in Calton of Glasgow, on which they expended upwards of [£]27,000 forming their spinning machines, (Chiefly with the view of ridding themselves of the combination [the male union],) of such reduced size as could easily be wrought by women. They employed women alone, as not being parties to the combination, and thus more easily managed, and less insubordinate than male spinners. These they paid at the same rate of wages, as were paid at other works to men. But they were waylaid and attacked, in going to, and returning from their work; the houses in which they resided, were broken open in the night. The women themselves were cruelly beaten and abused; and the mother of one of them killed; . . . And these nefarious attempts were persevered in so systematically, and so long, that Messrs. Dunlop and sons, found it necessary to dismiss all female spinners from their works, and to employ only male spinners, most probably the very men who had attempted their ruin.” First Report from the Select Committee on Artizans and Machinery, British Parliamentary Papers, 1824 vol. V, p. 525.

9 Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

10 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, Ch. 1, and K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Ch. 2.

11 For the period 1574 to 1821 about 45 percent of servants were female, but this fell to 32 percent in 1851. See Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

12 Men usually worked 12-hour days, and women averaged closer to 10 hours. See Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review, May 1997, 50:257-281.

13 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 65.

14 See Robert Allen, Enclosure and the Yeoman, Clarendon Press, 1992, and Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History, March 1999, 59:41-67.

15 While the scythe had been used for mowing grass for hay or cheaper grains for some time, the sickle was used for harvesting wheat until the nineteenth century. Thus adoption of the scythe for harvesting wheat seems to be a response to changing prices rather than invention of a new technology. The scythe required less labor to harvest a given acre, but left more grain on the ground, so as grain prices fell relative to wages, farmers substituted the scythe for the sickle. See E.J.T. Collins, “Harvest Technology and Labour Supply in Britain, 1790-1870,” Economic History Review, Dec. 1969, XXIII:453-473.

16 K.D.M. Snell, Annals of the Labouring Poor, Cambridge, 1985.

17 See Jane Humphries, “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries,” Journal of Economic History, March 1990, 50:17-42, and J.M. Neeson, Commoners: Common Rights, Enclosure and Social Change in England, 1700-1820, Cambridge Univ. Press, 1993.

18 See Peter King, “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850,” Economic History Review, 1991, XLIV:461-476.

19 Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, pp. 41-42. See also Deborah Valenze, The First Industrial Woman, Oxford Univ. Press, 1995.

20 Stephen Glover, The Directory of the County of Derby, Derby: Henry Mozley and Son, 1829.

21 Eden gives an example of gentlewomen who, on the death of their father, began to work as farmers. He notes, “not seldom, in one and the same day, they have divided their hours in helping to fill the dung-cart, and receiving company of the highest rank and distinction.” (F.M. Eden, The State of the Poor, vol. i., p. 626.) One woman farmer who was clearly an active manager celebrated her success in a letter sent to the Annals of Agriculture, (quoted by Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 30): “I bought a small estate, and took possession of it in the month of July, 1803. . . . As a woman undertaking to farm is generally a subject of ridicule, I bought the small estate by way of experiment: the gentlemen of the county have now complimented me so much on having set so good an example to the farmers, that I have determined on taking a very large farm into my hands.” The Annals of Agriculture give a number of examples of women farmers cited for their experiments or their prize-winning crops.

22 Tradesmen considered themselves lucky to find a wife who was good at business. In his autobiography James Hopkinson, a cabinetmaker, said of his wife, “I found I had got a good and suitable companion one with whom I could take sweet council and whose love and affections was only equall’d by her ability as a business woman.” Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, 1968, p. 96.

23 See Elizabeth Sanderson, Women and Work in Eighteenth-Century Edinburgh, St. Martin’s Press, 1996.

24 See K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Table 6.1.

25 The law requiring a seven-year apprenticeship before someone could work in a trade was repealed in 1814.

26 See Francois Crouzet, The First Industrialists, Cambridge Univ. Press, 1985, and M.L. Baumber, From Revival to Regency: A History of Keighley and Haworth, 1740-1820, Crabtree Ltd., Keighley, 1983.

27 First Report of the Central Board of His Majesty’s Commissioners for inquiry into the Employment of Children in Factories, with Minutes of Evidence, British Parliamentary Papers, 1833 (450) XX, A1, p. 120.

28 For example, in the case of “LaVie and another Assignees against Philips and another Assignees,” the court upheld the right of a woman to operate as feme sole. In 1764 James Cox and his wife Jane were operating separate businesses, and both went bankrupt within the space of two months. Jane’s creditors sued James’s creditors for the recovery of five fans, goods from her shop that had been taken for James’s debts. The court ruled that, since Jane was trading as a feme sole, her husband did not own the goods in her shop, and thus James’s creditors had no right to seize them. See William Blackstone, Reports of Cases determined in the several Courts of Westminster-Hall, from 1746 to 1779, London, 1781, p. 570-575.

29 See Deborah Valenze, Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England, Princeton Univ. Press, 1985.

30 See Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117.

31 In his autobiography James Hopkinson says of his wife, “How she laboured at the press and assisted me in the work of my printing office, with a child in her arms, I have no space to tell, nor in fact have I space to allude to the many ways she contributed to my good fortune.” James Hopkinson, Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, J.B. Goodman, ed., Routledge & Kegan Paul, 1968, p. 96. A 1739 poem by Mary Collier suggests that carrying babies into the field was fairly common; it contains these lines:

Our tender Babes into the Field we bear,

And wrap them in our Cloaths to keep them warm,

While round about we gather up the Corn;

. . .

When Night comes on, unto our Home we go,

Our Corn we carry, and our Infant too.

Mary Collier, The Woman’s Labour, Augustan Reprint Society, #230, 1985, p. 10. An 1835 Poor Law report stated that in Sussex, “the custom of the mother of a family carrying her infant with her in its cradle into the field, rather than lose the opportunity of adding her earnings to the general stock, though partially practiced before, is becoming very much more general now.” (Quoted in Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 85.)

32 Sarah Johnson of Nottingham claimed that she “Knows it is quite a common custom for mothers to give Godfrey’s and the Anodyne cordial to their infants, ‘it is quite too common.’ It is given to infants at the breast; it is not given because the child is ill, but ‘to compose it to rest, to sleep it,’ so that the mother may get to work. ‘Has seen an infant lay asleep on its mother’s lap whilst at the lace-frame for six or eight hours at a time.’ This has been from the effects of the cordial.” [Reports from Assistant Handloom-Weavers' Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 157] Mary Colton, a lace worker from Nottingham, described her use of the drug to parliamentary investigators thus: ‘Was confined of an illegitimate child in November, 1839. When the child was a week old she gave it a half teaspoonful of Godfrey’s twice a-day. She could not afford to pay for the nursing of the child, and so gave it Godfrey’s to keep it quiet, that she might not be interrupted at the lace piece; she gradually increased the quantity by a drop or two at a time until it reached a teaspoonful; when the infant was four months old it was so “wankle” and thin that folks persuaded her to give it laudanum to bring it on, as it did other children. A halfpenny worth, which was about a teaspoonful and three-quarters, was given in two days; continued to give her this quantity since February, 1840, until this last past (1841), and then reduced the quantity. She now buys a halfpenny worth of laudanum and a halfpenny worth of Godfrey’s mixed, which lasts her three days. . . . If it had not been for her having to sit so close to work she would never have given the child Godfrey’s. She has tried to break it off many times but cannot, for if she did, she should not have anything to eat.” [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 630].

33 Elizabeth Leadbeater, who worked for a Birmingham brass-founder, worked while she was nursing and had her mother look after the infant. [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 710.] Mrs. Smart, an agricultural worker from Calne, Wiltshire, noted, “Sometimes I have had my mother, and sometimes my sister, to take care of the children, or I could not have gone out.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 65.] More commonly, though, older siblings provided the childcare. “Older siblings” generally meant children of nine or ten years old, and included boys as well as girls. Mrs. Britton of Calne, Wiltshire, left her children in the care of her eldest boy. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 66] In a family from Presteign, Wales, containing children aged 9, 7, 5, 3, and 1, we find that “The oldest children nurse the youngest.” [F.M. Eden, State of the Poor, London: Davis, 1797, vol. iii, p. 904] When asked what income a labourer’s wife and children could earn, some respondents to the 1833 “Rural Queries” assumed that the eldest child would take care of the others, leaving the mother free to work. The returns from Bengeworth, Worcester, report that, “If the Mother goes to field work, the eldest Child had need to stay at home, to tend the younger branches of the Family.” Ewhurst, Surrey, reported that “If the Mother were employed, the elder Children at home would probably be required to attend to the younger Children.” [Report of His Majesty's Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, "Rural Queries," British Parliamentary Papers, 1834 (44) XXX, p. 488 and 593]

34 Parents heard of incidents, such as one reported in the Times (Feb. 6, 1819):

A shocking accident occurred at Llandidno, near Conway, on Tuesday night, during the absence of a miner and his wife, who had gone to attend a methodist meeting, and locked the house door, leaving two children within; the house by some means took fire, and was, together with the unfortunate children, consumed to ashes; the eldest only four years old!

Mothers were aware of these dangers. One mother who admitted to leaving her children at home worried greatly about the risks:

I have always left my children to themselves, and, God be praised! nothing has ever happened to them, though I thought it dangerous. I have many a time come home, and have thought it a mercy to find nothing has happened to them. . . . Bad accidents often happen. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 68.]

Leaving young children at home without child care carried real dangers, and the fact that most working mothers paid for child care suggests that they did not consider it an acceptable option.

35 In 1840 an observer of Spitalfields noted, “In this neighborhood, where the women as well as the men are employed in the manufacture of silk, many children are sent to small schools, not for instruction, but to be taken care of whilst their mothers are at work.” [Reports from Assistant Handloom-Weavers' Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 261] In 1840 the wife of a Gloucester weaver earned 2s. a week from running a school; she had twelve students and charged each 2d. a week. [Reports from Assistant Handloom Weavers' Commissioners, British Parliamentary Papers, 1840 (220) XXIV, p. 419] In 1843 the lace-making schools of the midlands generally charged 3d. per week. [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46, 64, 71, 72]

36 At one straw-plaiting school in Hertfordshire,

Children commence learning the trade about seven years old: parents pay 3d. a-week for each child, and for this they are taught the trade and taught to read. The mistress employs about from 15 to 20 at work in a room; the parents get the profits of the children’s labour. [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 64]

At these schools there was very little instruction; some time was devoted to teaching the children to read, but they spent most of their time working. One mistress complained that the children worked too much and learned too little: “In my judgment I think the mothers task the children too much; the mistress is obliged to make them perform it, otherwise they would put them to other schools.” Ann Page of Newport Pagnell, Buckinghamshire, had “eleven scholars” and claimed to “teach them all reading once a-day.” [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 66, 71] The standard rate of 3d. per week seems to have been paid for supervision of the children rather than for the instruction.

37 First Report of the Central Board of His Majesty’s Commissioners for Inquiring into the Employment of Children in Factories, British Parliamentary Papers, 1833 (450) XX, C1 p. 33.

38 Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46.

39 David Davies, The Case of Labourers in Husbandry Stated and Considered, London: Robinson, 1795, p. 14. Agricultural wages for this time period are found in Eden, State of the Poor, London: Davis, 1797.

40 In 1843 parliamentary investigator Alfred Austin reports, “Where a girl is hired to take care of children, she is paid about 9d. a week, and has her food besides, which is a serious deduction from the wages of the woman at work.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 26] Agricultural wages in the area were 8d. per day, so even without the cost of food, the cost of child care was about one-fifth of a woman’s wage. One Scottish woman earned 7s. per week in a coal mine and paid 2s.6d., or 36 percent of her income, for the care of her children. [B.P.P. 1844 (592) XVI, p. 6] In 1843 Mary Wright, an “over-looker” at a Buckinghamshire paper factory, paid even more for child care; she told parliamentary investigators that “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for three others.” [Children's Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46] She earned 10s.6d. per week, so her total child-care payments were 21 percent of her wage. Engels put the cost of child care at 1s. or 18d. a week. [Engels, [1845] 1926, p. 143] Factory workers often made 7s. a week, so again these women may have paid around one-fifth of their earnings for child care. Some estimates suggest that even higher fractions of women’s income went to child care; the overseer of Wisbech, Cambridge, reports, “The earnings of the Wife we consider comparatively small, in cases where she has a large family to attend to; if she has one or two children, she has to pay half, or perhaps more of her earnings for a person to take care of them.” [Report of His Majesty's Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, "Rural Queries," British Parliamentary Papers, 1834 (44) XXX, p. 76]
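The fractions quoted in this note follow from pre-decimal currency arithmetic (12 pence to the shilling). The short sketch below, in Python, reproduces them for illustration; the six-day working week assumed for the 8d. daily wage, and the reading of “3d. a-week for three others” as 3d. per child, are interpretive assumptions rather than statements from the sources.

def pence(shillings=0, extra_pence=0):
    """Convert a shillings-and-pence amount to pence (12d. = 1s.)."""
    return 12 * shillings + extra_pence

# Austin (1843): 9d. per week for child care against an 8d. daily wage,
# assuming a six-day working week.
print(pence(0, 9) / (pence(0, 8) * 6))                 # ~0.19, about one-fifth

# Scottish coal miner: 2s.6d. for child care out of 7s. weekly earnings.
print(pence(2, 6) / pence(7))                          # ~0.36

# Mary Wright: 1s.6d. for an infant plus 3d. each for three others,
# out of 10s.6d. weekly earnings.
print((pence(1, 6) + 3 * pence(0, 3)) / pence(10, 6))  # ~0.21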

Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes1 and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. As with schools, bridges, roads, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns, where banks expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable annual average rate of 6.3 percent (a calculation sketched following Table 1). Growth in the financial sector, then, outpaced growth in aggregate economic activity; nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.

Table 1

Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).
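As a rough check on the growth rates cited above, the following sketch (Python, illustrative only) computes the average annual growth rate of total loans implied by the 1820 and 1860 endpoints of Table 1. The continuously compounded (log) rate comes to about 6.3 percent and the geometric compound rate to about 6.5 percent, so the figure in the text appears to follow the log-rate convention.

import math

# Loan totals from Table 1 (in $ millions) and the span in years.
loans_1820, loans_1860 = 55.1, 691.9
years = 1860 - 1820

# Continuously compounded (log) average annual growth rate.
log_rate = math.log(loans_1860 / loans_1820) / years

# Geometric (compound) average annual growth rate, for comparison.
geometric_rate = (loans_1860 / loans_1820) ** (1 / years) - 1

print(f"log growth rate:       {log_rate:.1%}")        # ~6.3%
print(f"geometric growth rate: {geometric_rate:.1%}")  # ~6.5%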

Adaptability

As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector’s value is how and to what extent it evolves with changing economic conditions. Put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and serve the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America’s formal rules and informal constraints that rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island’s banks bore little resemblance to those in faraway Louisiana or Missouri, or even those in neighboring Connecticut. Each state’s banks took a different form, but their purpose was the same; namely, to provide the state’s citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England’s early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia’s banks. By 1850 the average Massachusetts bank had declined in relative terms, operating on about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island’s banks also shrank relative to Pennsylvania’s and were tiny compared to the large branch banks in the South and West.

Table 2

Average Bank Size by Capital and Lending in 1820 and 1850, Selected States and Cities

(in $ thousands)

State 1820 Capital 1820 Loans 1850 Capital 1850 Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia1,2 351.5 340.0 270.3 504.5
South Carolina2 na na 938.5 1,471.5
Kentucky2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).
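The size comparisons in the paragraph introducing Table 2 can be checked directly against the table. The sketch below is illustrative only; it assumes that the “country bank” comparisons refer to the “except Boston,” “except Providence,” and “except Philadelphia” rows, and that the Virginia comparison uses the branch-average capital figure.

# Paid-in capital per bank, $ thousands, from Table 2 (country-bank rows).
capital = {
    "Massachusetts (ex. Boston)":      {1820: 176.6, 1850: 170.3},
    "Rhode Island (ex. Providence)":   {1820: 60.6,  1850: 79.5},
    "Pennsylvania (ex. Philadelphia)": {1820: 162.6, 1850: 246.0},
    "Virginia (branch average)":       {1820: 351.5, 1850: 270.3},
}

ma_1820 = capital["Massachusetts (ex. Boston)"][1820]
ri_1820 = capital["Rhode Island (ex. Providence)"][1820]
va_1820 = capital["Virginia (branch average)"][1820]

print(f"RI/MA, 1820: {ri_1820 / ma_1820:.2f}")  # ~0.34, about one-third
print(f"RI/VA, 1820: {ri_1820 / va_1820:.2f}")  # ~0.17, about one-sixth
print(f"MA/VA, 1820: {ma_1820 / va_1820:.2f}")  # ~0.50, about one-half

# By 1850, Massachusetts country banks operated on roughly two-thirds
# the paid-in capital of Pennsylvania country banks.
ma_1850 = capital["Massachusetts (ex. Boston)"][1850]
pa_1850 = capital["Pennsylvania (ex. Philadelphia)"][1850]
print(f"MA/PA, 1850: {ma_1850 / pa_1850:.2f}")  # ~0.69, about two-thirds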

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital. The result was a progressive tax that fell disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a potentially better benchmark is the tax to loan ratio because large banks made more use of deposits than small banks. The tax to loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.

Lamoreaux (1993) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.2

Once the kinship orientation of the region’s banks was established it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England’s banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers, to discriminate between real and bogus banknotes, or to discriminate between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region’s port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes had become a constant irritant for city bankers. City bankers believed that country issues displaced Boston banknotes in local transactions. More irritating, though, was the constant demand by the city banks’ customers to accept country banknotes on deposit, which placed the burden of interbank clearing on the city banks.3

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank’s notes, it presented them for immediate redemption with an ultimatum: Join in a regular and organized redemption system or be subject to further unannounced redemption calls.4 Country banks objected to the Suffolk’s proposal because it required them to keep noninterest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region’s money supply (banknotes and deposits). Indeed, the Suffolk’s system was self-defeating in this regard as well. Because the system increased confidence in the value of any randomly encountered banknote, people were willing to hold larger banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England may have grown increasingly financially backward as a direct result of the region’s unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation, deposit banking, lagged far behind in New England compared with other regions. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks, and its supervisory role amounted to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region

Pennsylvania

After 1810 or so, many bank charters were granted in New England, but not because of the presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.5 After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving it the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply, and a plan for a new bank, the Philadelphia Bank, was hatched; its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter and nearly sank the new bank’s chances, until its promoters established a precedent that lasted throughout the antebellum era: they bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of the bank’s shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.6 Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. Indeed, the Emigrant Savings Bank in New York City served Irish immigrants almost exclusively. In the other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. The adoption of such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791 when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.7 Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was that it established an insurance fund insuring holders of banknotes and deposits against loss from bank failure. Ultimately, the fund proved insufficient to protect all bank creditors from loss during the panic of 1837, when eleven failures in rapid succession all but bankrupted it, delaying noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics in the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.

Michigan actually preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely after a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound in this as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s, and three other states had introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century, and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in detail, but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural banks found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and up to the Civil War branch banks served the state’s financial needs. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (which was consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represent a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor, and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal improvements plan in that many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, and the state required three of its Big Six branch banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepot and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal improvements programs of the 1820s and 1830s. Indiana followed such a strategy. So, too, did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia in different degrees. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s, when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company, which built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and banking was its focus. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages at longer than customary periods and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and landowners. What all these banks shared was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining if the net benefits were positive or negative requires more research, but for the moment we are forced to accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, they were all aimed squarely at a common goal; namely, realizing that region’s economic potential. Banks helped achieve the goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory their entire harvest. They could sell most of it, and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories were prone to substantial losses, and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used in constructing this essay, a brief bibliographic essay is provided instead of extensive in-text citations, which keeps the essay more readable and less cluttered. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis: University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976):

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

1 Banknotes were small-denomination IOUs printed by banks and circulated as currency. Modern U.S. money consists simply of banknotes issued by the Federal Reserve, which has a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes in purchasing goods and services, putting them into circulation. Contemporary law held that banks were required to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held about 30 percent of the total value of banknotes in circulation as reserves. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
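The leverage arithmetic in this note can be made explicit. The sketch below treats the note’s figures as stylized (a 30 percent specie reserve against notes in circulation and a 7 percent average loan rate) and computes the implied gross interest earned per dollar of specie held; it illustrates the note’s numbers rather than describing actual bank balance sheets.

# Stylized figures from note 1.
reserve_ratio = 0.30    # specie held per dollar of banknotes in circulation
loan_rate = 0.07        # average interest rate on loans

specie = 30.0                                # dollars of gold and silver held
notes_issued = specie / reserve_ratio        # $100 of banknotes lent out

interest_income = notes_issued * loan_rate   # $7.00 per year
return_on_specie = interest_income / specie  # roughly 23 percent

print(f"notes issued:     ${notes_issued:.0f}")
print(f"interest income:  ${interest_income:.2f}")
print(f"return on specie: {return_on_specie:.1%}")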

2 Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks, which were quasi-charitable organizations designed to encourage savings by the working classes and provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

3 Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve System provides clearing services between banks. The accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and sends the checks back to the bank on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because the bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.
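
The mechanics of clearing can be shown with a toy example; the banks and note holdings below are hypothetical, and the point is only that regular redemption nets out mutual claims and drains specie from net debtors.

```python
# Hypothetical clearing between two antebellum banks (all figures invented).
# Each entry is the face value of the other bank's notes a bank has accumulated.
notes_held = {
    ("Suffolk", "Country"): 8_000,   # Suffolk holds $8,000 of Country Bank notes
    ("Country", "Suffolk"): 5_000,   # Country Bank holds $5,000 of Suffolk notes
}

# At settlement, mutual claims cancel; only the net balance moves in specie.
net_due_to_suffolk = notes_held[("Suffolk", "Country")] - notes_held[("Country", "Suffolk")]
print(f"Country Bank owes Suffolk ${net_due_to_suffolk:,} in specie")   # $3,000

# A large net redemption drains the debtor bank's reserves, signaling that it
# should contract its note issue and loans to match the lower demand for credit.
```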

4 The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

5 Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders. Whether this bank was an “insider” bank thus remains an open question, and the answer depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can be usefully drawn for antebellum banks.

6 Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

7 Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session so that many legislators voted on the bill without having read it thoroughly or at all.


An Overview of the Economic History of Uruguay
since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable species, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars, with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows remarkable persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still spasmodic GDP growth.

GDP growth shows a pattern featured by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms of trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force to set the cycle up, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period Per capita GDP fall (%) Length of recession (years) Time to pre-crisis levels (years) Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.
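
The “purchasing power of exports” used here is conventionally the volume of exports multiplied by the terms of trade. A minimal sketch with invented index numbers (not Uruguayan data) shows how a fourfold rise can come from both channels:

```python
# Hypothetical index numbers (1870 = 100), for illustration only.
export_volume = {"1870": 100, "1913": 250}
terms_of_trade = {"1870": 100, "1913": 160}   # price of exports / price of imports

def purchasing_power(year):
    # Purchasing power of exports = export volume x terms of trade (index form).
    return export_volume[year] * terms_of_trade[year] / 100

growth = purchasing_power("1913") / purchasing_power("1870")
print(f"Purchasing power of exports grew {growth:.1f}-fold")   # 4.0-fold
```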

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, and was mainly domestic-market orientated. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focusing on commerce, transport and traditional state bureaucracy during the first globalization boom; focusing on health care, education and social services, during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and focusing on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. The trend later favored industrialization policies, democratization, the introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (nearly followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weak during the first half of the twentieth century, intensified during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reversed afterwards.

The gap in life-expectancy at birth has always been much smaller than the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay: 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina: 63 34 38 31 32 29 25 25 24 21 15 16
Brazil: 23 8 8 8 8 8 7 9 9 13 11 10
Latin America: 13 12 13 10 9 9 9 6 6
USA: 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay: 57 65 72 79 85 91 92 94 95 97 99
Argentina: 57 65 72 79 85 91 93 94 94 96 98
Brazil: 39 38 37 42 46 51 61 69 76 81 86
Latin America: 28 30 34 37 42 47 56 65 71 77 83
USA: 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay: 23 31 31 30 34 42 52 46 43
Argentina: 28 41 42 36 39 43 55 44 45
Brazil: 12 11 12 14 18 22 30 42
Latin America: n.a.
USA: 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay: 102 100 91 85 91 97 97 97 95 96 96
Argentina: 81 85 86 90 88 90 93 94 95 96 95
Brazil: 60 60 56 58 58 63 79 83 85 88 88
Latin America: 65 63 58 58 59 63 71 77 81 88 87
USA: 100 100 100 100 100 100 100 100 100 100 100

Note: Rows with fewer entries reflect years for which data are not available.

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Great-War reconstruction after 1851, the Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes, including: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and the strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, land properties were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and provided the main part of Uruguay’s services, civil servants and the weak and handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector showed a very poor performance, due to lack of innovation away from natural pastures. In the 1930s, its performance deteriorated mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade strongly influenced protectionism, productivity growth and domestic demand. The government raised revenue by manipulating exchange rates: when export prices rose, the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.
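
To make the multiple exchange rate mechanism concrete, here is a minimal sketch with invented rates (illustrative only, not actual Uruguayan rates): the state in effect taxed exporters by buying their foreign exchange at a low price, subsidized favored imports by selling them dollars cheaply, and charged more for disfavored imports, pocketing the spread.

```python
# Hypothetical multiple-exchange-rate schedule, in pesos per U.S. dollar.
# None of these numbers are historical Uruguayan rates.
buy_rate_exports = 1.50         # rate paid to exporters surrendering dollars
sell_rate_capital_goods = 1.80  # cheap dollars for capital goods, fuel, raw materials
sell_rate_other_imports = 3.00  # expensive dollars for disfavored imports

export_dollars = 100.0          # dollars surrendered by exporters
capital_import_dollars = 60.0   # dollars sold for favored imports
other_import_dollars = 40.0     # dollars sold for other imports

pesos_paid_out = export_dollars * buy_rate_exports
pesos_taken_in = (capital_import_dollars * sell_rate_capital_goods
                  + other_import_dollars * sell_rate_other_imports)

# The spread between buying and selling rates is implicit fiscal revenue.
print(f"Implicit revenue: {pesos_taken_in - pesos_paid_out:.0f} pesos")   # 78 pesos
```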

However, rent-seeking industries searching for protection, and a weak clientelist state crowded with civil servants recruited in exchange for political favors to the parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which both traditional parties had created while the state was expanding at the national and local level, was now unable to absorb the increasing social conflicts, colored by stringent ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports, as the engine of growth, was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-orientated towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-orientated to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had continued above 50% since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and obviously collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural-resource-intensive exports to the region and other emerging markets with modest intra-industrial trade, mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows, which fueled a rather volatile growth period. However, by the year 2000 Uruguay’s position in relation to the leaders of the world economy, as measured by per capita GDP, real wages, equity and education coverage, was much worse than it had been fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by, and highly dependent on, foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crisis at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, thus making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. On that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Latin American Historical Review, 1985.

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica by Luis Bertola. Montevideo: Uruguay en la región y el mundo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing into the mid-eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1739-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada, and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century, the Navigation Acts required that all trade within the empire be conducted on ships which were constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

The movement for independence arose in the colonies following a series of critical decisions made by the British government after the end of the war with France in 1763. Two themes emerge from what was to be a fundamental change in British economic policy toward the American colonies. The first involved western land. With the acquisition from the French of the territory between the Allegheny Mountains and the Mississippi River, the British decided to isolate the area from the rest of the colonies. Under the terms of the Proclamation of 1763 and the Quebec Act of 1774, colonists were not allowed to settle there or trade with the Indians without the permission of the British government. These actions nullified the claims to land in the area by a host of American colonies, individuals, and land companies. The essence of the policy was to maintain British control of the fur trade in the West by restricting settlement by the Americans.

Tax Policies

The second fundamental change involved taxation. The British victory over the French had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). Furthermore, the British had decided in 1763 to place a standing army of 10,000 men in North America. The bulk of these forces were stationed in newly acquired territory to enforce the new land policy in the West. Forts were to be built which would become the new centers of trade with the Indians. The British decided that the Americans should share the costs of the military buildup in the colonies. The reason seemed obvious. Taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). It was time, in the British view, for the Americans to begin paying a larger share of the expenses of the empire.

Accordingly, Parliament passed a series of tax acts, the revenue from which was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England’s Prime Minister, the act lowered tariff rates on non-British products from the West Indies as well as strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were less than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767 a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Policies, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in the Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Once that was done, Parliament went on to pass the Quebec Act as a continuation of its policy of restricting the settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774 delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason for this was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened then was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October, the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these were the actions of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England in a speech to Parliament declared that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. Making a case for the avoidance of British taxes as a major incentive for independence proved difficult, however. The reason was that many of the taxes imposed were later repealed, and the actual level of taxation appeared to be relatively modest. After all, soon after adopting the Constitution the Americans taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather, it seemed the incentive for independence might have been the avoidance of the British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
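
The logic of this counterfactual can be summarized in a short sketch. All of the dollar figures below are placeholders chosen only to show how gross burdens, offsetting benefits, and colonial income combine into a net burden of under one percent; they are not Thomas’s actual estimates.

```python
# Placeholder per capita magnitudes, in colonial-era dollars (illustrative only).
burden_indirect_shipping = 1.20   # lost consumer and producer surplus from routing trade via England
benefit_protection = 0.40         # estimated value of British naval and military protection
benefit_bounties = 0.30           # bounties paid on favored colonial goods
per_capita_income = 100.00

net_burden = burden_indirect_shipping - (benefit_protection + benefit_bounties)
share_of_income = net_burden / per_capita_income
print(f"Net burden: {share_of_income:.2%} of per capita income")   # 0.50%
```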

American Expectations about Future British Policy

Did this mean then that the Americans had few if any economic incentives for independence? Upon further consideration, economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth in the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were indeed formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also, for a variety of reasons, the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence, Congress had set about defining the institutional relationship between it and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment, neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles, the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially that they would try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive, Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated without a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and then sent down along the Hudson River, where it was to link up with an army sent from New York City. Unfortunately for the British, the plan unraveled completely when Burgoyne’s army was defeated at the Battle of Saratoga in October and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources. The states were issuing their own individual currencies to help finance expenditures. Moreover, the British, in an effort to destroy the funding system of the Continental Congress, had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988; Michener, 1988). Furthermore, inflation may have been exacerbated by any negative impact upon output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress in that up to this point almost ninety percent of its revenue had been generated from currency emissions.

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and in turn organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South, so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation, some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With its currency rapidly depreciating in value, Congress increasingly relied on funds from other sources such as state requisitions, domestic loans, and French loans of specie. As a last resort Congress authorized the army to confiscate property.

Yorktown

Fortunately for the Americans, the British military effort collapsed before the funding system of Congress did. In a combined effort during the fall of 1781, French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces, the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms for peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

The Formation of a National Government

When you start a revolution you have to be prepared for the possibility you might win. This means being prepared to form a new government. When the Americans declared independence their experience of governing at a national level was indeed limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding the political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs. It was not given the power to tax or regulate commerce. The expenses of Congress were to be made from a common treasury with funds supplied by the states. This revenue was to be generated from exercising the power granted to the states to determine their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles. It took over three years for the states to ratify the Articles. The primary reason for the delay was a dispute over control of land in the West as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress. The Articles were then ratified and put into effect on March 1, 1781. This was just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one to be addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784 Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and lacking the power of taxation, Congress did not have an independent source of revenue to pay off its domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was being increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention to meet in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are of course many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, no state under the Constitution would be allowed to coin money or issue bills of credit. Only the national government could coin money and regulate its value. Punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s. The national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

TABLES
Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission Nominal Dollars Emitted (000) Annual Emission As Share of Total Nominal Stock Emitted Specie Value of Annual Emission (000) Annual Emission As Share of Total Specie Value Emitted
1775 $6,000 3% $6,000 15%
1776 19,000 8 15,330 37
1777 13,000 5 4,040 10
1778 63,000 26 10,380 25
1779 140,500 58 5,270 13
Total $241,500 100% $41,020 100%

Source: Bullock (1895), 135.
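
Table 1’s nominal and specie columns imply how rapidly the Continental depreciated; the short sketch below, using the table’s own figures (in thousands of dollars), recovers the implied specie value per nominal dollar of each year’s emission.

```python
# Figures from Table 1, in thousands of dollars: (nominal emitted, specie value).
emissions = {
    1775: (6_000, 6_000),
    1776: (19_000, 15_330),
    1777: (13_000, 4_040),
    1778: (63_000, 10_380),
    1779: (140_500, 5_270),
}

for year, (nominal, specie) in emissions.items():
    # Specie value per nominal dollar of that year's emission.
    print(f"{year}: ${specie / nominal:.2f} in specie per nominal dollar")
# 1775: $1.00, 1776: $0.81, 1777: $0.31, 1778: $0.16, 1779: $0.04
```
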
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission Nominal Dollars Emitted (000) Year of Emission Nominal Dollars Emitted (000)
1775 $4,740 1778 $9,118
1776 13,328 1779 17,613
1777 9,573 1780 66,813
1781 123,376
Total $27,641 Total $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no. 4 (2001): 639-56.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688-1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1 no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48 no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw-Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). New York: University Press, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48 no. 3 (1988): 682-692.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

The National Recovery Administration

Barbara Alexander, Charles River Associates

This article outlines the history of the National Recovery Administration, one of the most important and controversial agencies in Roosevelt’s New Deal. It discusses the agency’s “codes of fair competition” under which antitrust law exemptions could be granted in exchange for adoption of minimum wages, problems some industries encountered in their subsequent attempts to fix prices under the codes, and the macroeconomic effects of the program.

The early New Deal suspension of antitrust law under the National Recovery Administration (NRA) is surely one of the oddest episodes in American economic history. In its two-year life, the NRA oversaw the development of so-called “codes of fair competition” covering the larger part of the business landscape.1 The NRA generally is thought to have represented a political exchange whereby business gave up some of its rights over employees in exchange for permission to form cartels.2 Typically, labor is taken to have gotten the better part of the bargain: the union movement extended its new powers after the Supreme Court struck down the NRA in 1935, while the business community faced a newly aggressive FTC by the end of the 1930s. While this characterization may be true in broad outline, close examination of the NRA reveals that matters may be more complicated than the interpretation of the program as a win for labor and a missed opportunity for business suggests.

Recent evaluations of the NRA have wended their way back to themes sounded during the early nineteen thirties, in particular the interrelationships between the so-called “trade practice” or cartelization provisions of the program and the grant of enhanced bargaining power to trade unions.3 On the microeconomic side, allowing unions to bargain for industry-wide wages may have facilitated cartelization in some industries. Meanwhile, macroeconomists have suggested that the Act and its progeny, especially labor measures such as the National Labor Relations Act, may bear more responsibility for the length and severity of the Great Depression than has been recognized heretofore.4 If this thesis holds up to closer scrutiny, the era may come to be seen as a primary example of the potential macroeconomic costs of shifts in political and economic power.

Kickoff Campaign and Blanket Codes

The NRA began operations in a burst of “ballyhoo” during the summer of 1933. 5 The agency was formed upon passage of the National Industrial Recovery Act (NIRA) in mid-June. A kick-off campaign of parades and press events succeeded in getting over 2 million employers to sign a preliminary “blanket code” known as the “President’s Re-Employment Agreement.” Signatories of the PRA pledged to pay minimum wages ranging from around $12 to $15 per 40-hour week, depending on size of town. Some 16 million workers were covered, out of a non-farm labor force of some 25 million. “Share-the-work” provisions called for limits of 35 to 40 hours per week for most employees. 6

NRA Codes

Over the next year and a half, the blanket code was superseded by over 500 codes negotiated for individual industries. The NIRA provided that: “Upon the application to the President by one or more trade or industrial associations or groups, the President may approve a code or codes of fair competition for the trade or industry.” 7 The carrot held out to induce participation was enticing: “any code … and any action complying with the provisions thereof . . . shall be exempt from the provisions of the antitrust laws of the United States.” 8 Representatives of trade associations overran Washington, and by the time the NRA was abolished, hundreds of codes covering over three-quarters of private, non-farm employment had been approved.9 Code signatories were supposed to be allowed to use the NRA “Blue Eagle” as a symbol that “we do our part” only as long as they remained in compliance with code provisions.10

Disputes Arise

Almost 80 percent of the codes had provisions that were directed at establishing price floors.11 The Act did not specifically authorize businesses to fix prices, and indeed it specified that “. . . codes are not designed to promote monopolies.” 12 However, it is an understatement to say that there was never any consensus among firms, industries, and NRA officials as to precisely what was to be allowed as part of an acceptable code. Arguments about exactly what the NIRA allowed, and how the NRA should implement the Act, began during its drafting and continued unabated throughout its life. The arguments extended from the level of general principles to the smallest details of policy, which is unsurprising given that appropriate regulatory design depends entirely on precise regulatory objectives, and those objectives were in dispute here from start to finish.

To choose just one out of many examples of such disputes: There was a debate within the NRA as to whether “code authorities” (industry governing bodies) should be allowed to use industry-wide or “representative” cost data to define a price floor based on “lowest reasonable cost.” Most economists would understand this type of rule as a device that would facilitate monopoly pricing. However, a charitable interpretation of the views of administration proponents is that they had some sort of “soft competition” in mind. That is, they wished to develop and allow the use of mechanisms that would extend to more fragmented industries a type of peaceful coexistence more commonly associated with oligopoly. Those NRA supporters of the representative-cost-based price floor imagined that a range of prices would emerge if such a floor were to be set, whereas detractors believed that “the minimum would become the maximum,” that is, the floor would simply be a cartel price, constraining competition across all firms in an industry.13

Price Floors

While a rule allowing emergency price floors based on “lowest reasonable cost” was eventually approved, there was no coherent NRA program behind it.14 Indeed, the NRA and code authorities often operated at cross-purposes. At the same time that some officials of the NRA arguably took actions to promote softened competition, some in industry tried to implement measures more likely to support hard-core cartels, even when they thereby reduced the chance of soft competition should collusion fail. For example, with the partial support of the NRA, many code authorities moved to standardize products, shutting off product differentiation as an arena of potential rivalry, in spite of its role as one of the strongest mechanisms that might soften price competition.15 Of course if one is looking to run a naked price-fixing scheme, it is helpful to eliminate product differentiation as an avenue for cost-raising, profit-eroding rivalry. An industry push for standardization can thus be seen as a way of supporting hard-core cartelization, while less enthusiasm on the part of some administration officials may have reflected an understanding, however intuitive, that socially more desirable soft competition required that avenues for product differentiation be left open.

National Recovery Review Board

According to some critical observers then and later, the codes did lead to an unsurprising sort of “golden age” of cartelization. The National Recovery Review Board, led by an outraged Clarence Darrow (of Scopes “monkey trial” fame) concluded in May of 1934 that “in certain industries monopolistic practices existed.” 16 While there are legitimate examples of every variety of cartelization occurring under the NRA, many contemporaneous and subsequent assessments of Darrow’s work dismiss the Board’s “analysis” as hopelessly biased. Thus although its conclusions are interesting as a matter of political economy, it is far from clear that the Board carried out any dispassionate inventory of conditions across industries, much less a real weighing of evidence.17

Compliance Crisis

In contrast to Darrow’s perspective, other commentators focus on the “compliance crisis” that erupted within a few months of passage of the NIRA.18 Many industries were faced with “chiselers” who refused to respect code pricing rules. Firms that attempted to uphold code prices in the face of defection lost both market share and respect for the NRA.

NRA state compliance offices had recorded over 30,000 “trade practice” complaints by early 1935.19 However, the compliance program was characterized by “a marked timidity on the part of NRA enforcement officials.” 20 This timidity was fatal to the program, since monopoly pricing can easily be more damaging than is the most bare-knuckled competition to a firm that attempts it without parallel action from its competitors. NRA hesitancy came about as a result of doubts about whether a vigorous enforcement effort would withstand constitutional challenge, a not-unrelated lack of support from the Department of Justice, public antipathy for enforcement actions aimed at forcing sellers to charge higher prices, and unabating internal NRA disputes about the advisability of the price-fixing core of the trade practice program.21 Consequently, by mid-1934, firms disinclined to respect code pricing rules were ignoring them. By that point then, contrary to the initial expectations of many code signatories, the new antitrust regime represented only permission to form voluntary cartelization agreements, not the advent of government-enforced cartels. Even there, participants had to be discreet, so as not to run afoul of the antimonopoly language of the Act.

It is still far from clear how much market power was conferred by the NRA’s loosening of antitrust constraints. Of course, modern observers of the alternating successes and failures of cartels such as OPEC will not be surprised that the NRA program led to mixed results. In the absence of government enforcement, the program simply amounted to de facto legalization of self-enforcing cartels. With respect to the ease of collusion, economic theory is clear only on the point that self-enforceability is an open question; self-interest may lead to either breakdown of agreements or success at sustaining them.

Conflicts between Large and Small Firms

Some part of the difficulties encountered by NRA cartels may have had roots in a progressive mandate to offer special protection to the “little guy.” The NIRA had specified that acceptable codes of fair competition must not “eliminate or oppress small enterprises,” 22 and that “any organization availing itself of the benefits of this title shall be truly representative of the trade or industry . . . Any organization violating … shall cease to be entitled to the benefits of this title.” 23 Majority rule provisions were exceedingly common in codes, and were most likely a reflection of this statutory mandate. The concern for small enterprise had strong progressive roots.24 Justice Brandeis’s well-known antipathy for large-scale enterprise and concentration of economic power reflected a widespread and long-standing debate about the legitimate goals of the American experiment.

In addition to evaluating monopolization under the codes, the Darrow board had been charged with assessing the impact of the NRA on small business. Its conclusion was that “in certain industries small enterprises were oppressed.” Again however, as with his review of monopolization, Darrow may have seen only what he was predisposed to see. A number of NRA “code histories” detail conflicts within industries in which small, higher-cost producers sought to use majority rule provisions to support pricing at levels above those desired by larger, lower-cost producers. In the absence of effective enforcement from the government, such prices were doomed to break down, triggering repeated price wars in some industries.25

By 1935, there was understandable bitterness about what many businesses viewed as the lost promise of the NRA. Undoubtedly, the bitterness was exacerbated by the fact that the NRA required higher wages while failing to deliver the tools needed for effective cartelization. However, it is not entirely clear that everyone in the business community felt that the labor provisions of the Act were undesirable.26

Labor and Employment Issues

By their nature, market economies give rise to surplus-eroding rivalry among those who would be better off collectively if they could only act in concert. NRA codes of fair competition, specifying agreements on pricing and terms of employment, arose from a perceived confluence of interests among representatives of “business,” “labor,” and “the public” in muting that rivalry. Many proponents of the NIRA held that competitive pressures on business had led to downward pressure on wages, which in turn caused low consumption, leading to greater pressure on business, and so on. Allowing workers to organize and bargain collectively, while their employers pledged to one another not to sell below cost, was identified as a way to arrest harmful deflationary forces. Knowledge that one’s rivals would also be forced to pay “code wages” had some potential for aiding cartel survival. Thus the rationale for NRA wage supports at the microeconomic level potentially dovetailed with the macroeconomic theory by which higher wages were held to support higher consumption and, in turn, higher prices.

Labor provisions of the NIRA appeared in Section 7: “. . . employees shall have the right to organize and bargain collectively through representatives of their own choosing … employers shall comply with the maximum hours of labor, minimum rates of pay, and other conditions of employment…” 27 Each “code of fair competition” had to include labor provisions acceptable to the National Recovery Administration, developed during a process of negotiations, hearings, and review. Thus in order to obtain the shield against antitrust prosecution for their “trade practices” offered by an approved code, significant concessions to workers had to be made.

The NRA is generally judged to have been a success for labor and a miserable failure for business. However, evaluation is complicated to the extent that labor could not have achieved gains with respect to collective bargaining rights over wages and working conditions, had those rights not been more or less willingly granted by employers operating under the belief that stabilization of labor costs would facilitate cartelization. The labor provisions may have indeed helped some industries as well as helping workers, and for firms in such industries, the NRA cannot have been judged a failure. Moreover, while some businesses may have found the Act beneficial, because labor cost stability or freedom to negotiate with rivals enhanced their ability to cooperate on price, it is not entirely obvious that workers as a class gained as much as is sometimes contended.

The NRA did help solidify new and important norms regarding child labor, maximum hours, and other conditions of employment; it will never be known whether the same progress could have been made had industry not been more or less hornswoggled into giving ground, with the antitrust laws as bait. Whatever the long-term effects of the NRA on worker welfare, the short-term gains for labor associated with higher wages were questionable. While those workers who managed to stay employed throughout the nineteen thirties benefited from higher wages, to the extent that workers were also consumers, and often unemployed consumers at that, or even potential entrepreneurs, they may have been better off without the NRA.

The issue is far from settled. Ben Bernanke and Martin Parkinson examine the economic growth that occurred during the New Deal in spite of higher wages and suggest “part of the answer may be that the higher wages ‘paid for themselves’ through increased productivity of labor. Probably more important, though, is the observation that with imperfectly competitive product markets, output depends on aggregate demand as well as the real wage. Maybe Herbert Hoover and Henry Ford were right: Higher real wages may have paid for themselves in the broader sense that their positive effect on aggregate demand compensated for their tendency to raise cost.” 28 However, Christina Romer establishes a close connection between NRA programs and the failure of wages and prices to adjust to high unemployment levels. In her view, “By preventing the large negative deviations of output from trend in the mid-1930s from exerting deflationary pressure, [the NRA] prevented the economy’s self-correction mechanism from working.” 29

Aftermath of Supreme Court’s Ruling in Schechter Case

The Supreme Court struck down the NRA on May 27, 1935; the case was a dispute over violations of labor provisions of the “Live Poultry Code” allegedly perpetrated by the Schechter Poultry Corporation. The Court held the code to be invalid on grounds of “attempted delegation of legislative power and the attempted regulation of intrastate transactions which affect interstate commerce only indirectly.” 30 There were to be no more grand bargains between business and labor under the New Deal.

Riven by divergent agendas rooted in industry- and firm-specific technology and demand, “business” was never able to speak with even the tenuous degree of unity achieved by workers. Following the abortive attempt to get the government to enforce cartels, firms and industries went their own ways, using a variety of strategies to enhance their situations. A number of sectors did succeed in getting passage of “little NRAs” with mechanisms tailored to mute competition in their particular circumstances. These mechanisms included the Robinson-Patman Act, aimed at strengthening traditional retailers against the ability of chain stores to buy at lower prices; the Guffey Acts, in which high-cost bituminous coal operators and coal miners sought protection from the competition of lower-cost operators; and the Motor Carrier Act, in which high-cost incumbent truckers obtained protection against new entrants.31

Ongoing macroeconomic analysis suggests that the general public interest may have been poorly served by the experiment of the NRA. As with many macroeconomic theories, the validity of the underconsumption scenario that was put forth in support of the program depended on the strength and timing of the operation of its various mechanisms. Increasingly it appears that the NRA set off inflationary forces thought by some to be desirable at the time, but that in fact had depressing effects on demand for labor and on output. Pure monopolistic deadweight losses probably were less important than higher wage costs (although there has not been any close examination of inefficiencies that may have resulted from the NRA’s attempt to protect small, higher-cost producers). The strength of any mitigating effects on aggregate demand remains to be established.

1 Leverett Lyon, P. Homan, L. Lorwin, G. Terborgh, C. Dearing, L. Marshall, The National Recovery Administration: An Analysis and Appraisal, Washington: Brookings Institution, 1935, p. 313, footnote 9.

2 See, for example, Charles Frederick Roos, NRA Economic Planning, Colorado Springs: Cowles Commission, 1935, p. 343.

3 See, for example, Colin Gordon, New Deals: Business, Labor, and Politics in America, 1920-1935, New York: Cambridge University Press, 1993, especially chapter 5.

4 Christina D. Romer, “Why Did Prices Rise in the 1930s?” Journal of Economic History 59, no. 1 (1999): 167-199; Michael Weinstein, Recovery and Redistribution under the NIRA, Amsterdam: North Holland, 1980; and Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression,” Working Paper 597, Federal Reserve Bank of Minneapolis, February 2001. But also see Ben Bernanke and Martin Parkinson, “Unemployment, Inflation and Wages in the American Depression: Are There Lessons for Europe?” American Economic Review: Papers and Proceedings 79, no. 2 (1989): 210-214.

5 See, for example, Donald Brand, Corporatism and the Rule of Law: A Study of the National Recovery Administration, Ithaca: Cornell University Press, 1988, p. 94.

6 See, for example, Roos, op. cit., pp. 77, 92.

7 Section 3(a) of The National Industrial Recovery Act, reprinted at p. 478 of Roos, op. cit.

8 Section 5 of The National Industrial Recovery Act, reprinted at p. 483 of Roos, op. cit. Note though, that the legal status of actions taken during the NRA era was never clear; Roos points out that “…President Roosevelt signed an executive order on January 20, 1934, providing that any complainant of monopolistic practices … could press it before the Federal Trade Commission or request the assistance of the Department of Justice. And, on the same date, Donald Richberg issued a supplementary statement which said that the provisions of the anti-trust laws were still in effect and that the NRA would not tolerate monopolistic practices.” (Roos, op. cit. p. 376.)

9 Lyon, op. cit., p. 307, cited at p. 52 in Cole and Ohanian, op. cit.

10 Roos, op. cit., p. 75; and Blackwell Smith, My Imprint on the Sands of Time: The Life of a New Dealer, Vantage Press, New York, p. 109.

11 Lyon, op. cit., p. 570.

12 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

13 Roos, op. cit., at pp. 254-259. Charles Roos comments that “Leon Henderson and Blackwell Smith, in particular, became intrigued with a notion that competition could be set up within limits and that in this way wide price variations tending to demoralize an industry could be prevented.”

14 Lyon, et al., op. cit., p. 605.

15 Smith, Assistant Counsel of the NRA (per Roos, op. cit., p. 254), has the following to say about standardization: “One of the more controversial subjects, which we didn’t get into too deeply, except to draw guidelines, was standardization.” Smith goes on to discuss the obvious need to standardize rail track gauges, plumbing fittings, and the like, but concludes, “Industry on the whole wanted more standardization than we could go with.” (Blackwell Smith, op. cit., pp. 106-7.) One must not go overboard looking for coherence among the various positions espoused by NRA administrators; along these lines it is worth remembering Smith’s statement some 60 years later: “Business’s reaction to my policy [Smith was speaking generally here of his collective proposals] to some extent was hostile. They wished that the codes were not as strict as I wanted them to be. Also, there was criticism from the liberal/labor side to the effect that the codes were more in favor of business than they should have been. I said, ‘We are guided by a squealometer. We tune policy until the squeals are the same pitch from both sides.’” (Smith, op. cit. p. 108.)

16 Quoted at p. 378 of Roos, op. cit.

17 Brand, op. cit. at pp. 159-60 cites in agreement extremely critical conclusions by Roos (op. cit. at p. 409) and Arthur Schlesinger, The Age of Roosevelt: The Coming of the New Deal, Boston: Houghton Mifflin, 1959, p. 133.

18 Roos acknowledges a breakdown by spring of 1934: “By March, 1934 something was urgently needed to encourage industry to observe code provisions; business support for the NRA had decreased materially and serious compliance difficulties had arisen.” (Roos, op. cit., at p. 318.) Brand dates the start of the compliance crisis much earlier, in the fall of 1933. (Brand, op. cit., p. 103.)

19 Lyon, op. cit., p. 264.

20 Lyon, op. cit., p. 268.

21 Lyon, op. cit., pp. 268-272. See also Peter H. Irons, The New Deal Lawyers, Princeton: Princeton University Press, 1982.

22 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

23 Section 6(b) of The National Industrial Recovery Act, op. cit.

24 Brand, op. cit.

25 Barbara Alexander and Gary D. Libecap, “The Effect of Cost Heterogeneity in the Success and Failure of the New Deal’s Agricultural and Industrial Programs,” Explorations in Economic History 37 (2000): 370-400.

26 Gordon, op. cit.

27 Section 7 of the National Industrial Recovery Act, reprinted at pp. 484-5 of Roos, op. cit.

28 Bernanke and Parkinson, op. cit., p. 214.

29 Romer, op. cit., p. 197.

30 Supreme Court of the United States, Nos. 854 and 864, October term, 1934, (decision issued May 27, 1935). Reprinted in Roos, op. cit., p. 580.

31 Ellis W. Hawley, The New Deal and the Problem of Monopoly: A Study in Economic Ambivalence, Princeton: Princeton University Press, 1966, p. 249; Irons, op. cit., pp. 105-106, 248.

History of Workplace Safety in the United States, 1880-1970

Mark Aldrich, Smith College

The dangers of work are usually measured by the number of injuries or fatalities occurring to a group of workers, usually over a period of one year. 1 Over the past century such measures reveal a striking improvement in the safety of work in all the advanced countries. In part this has been the result of the gradual shift of jobs from relatively dangerous goods production such as farming, fishing, logging, mining, and manufacturing into such comparatively safe work as retail trade and services. But even the dangerous trades are now far safer than they were in 1900. To take but one example, mining today remains a comparatively risky activity. Its annual fatality rate is about nine for every one hundred thousand miners employed. A century ago in 1900 about three hundred out of every one hundred thousand miners were killed on the job each year. 2

The Nineteenth Century

Before the late nineteenth century we know little about the safety of American workplaces because contemporaries cared little about it. As a result, only fragmentary information exists prior to the 1880s. Pre-industrial laborers faced risks from animals and hand tools, ladders and stairs. Industrialization substituted steam engines for animals, machines for hand tools, and elevators for ladders. But whether these new technologies generally worsened the dangers of work is unclear. What is clear is that nowhere was the new work associated with the industrial revolution more dangerous than in America.

US Was Unusually Dangerous

Americans modified the path of industrialization that had been pioneered in Britain to fit the particular geographic and economic circumstances of the American continent. Reflecting the high wages and vast natural resources of a new continent, this American system encouraged the use of labor-saving machines and processes. These developments occurred within a legal and regulatory climate that diminished employers’ interest in safety. As a result, Americans developed production methods that were both highly productive and often very dangerous. 3

Accidents Were “Cheap”

While workers injured on the job or their heirs might sue employers for damages, winning proved difficult. Where employers could show that the worker had assumed the risk, or had been injured by the actions of a fellow employee, or had himself been partly at fault, courts would usually deny liability. A number of surveys taken about 1900 showed that only about half of all workers fatally injured recovered anything, and their average compensation amounted to only about half a year’s pay. Because accidents were so cheap, American industrial methods developed with little reference to their safety. 4

Mining

Nowhere was the American system more dangerous than in early mining. In Britain, coal seams were deep and coal expensive. As a result, British mines used mining methods that recovered nearly all of the coal because they used waste rock to hold up the roof. British methods also concentrated the working, making supervision easy, and required little blasting. American coal deposits, by contrast, were both vast and near the surface; they could be tapped cheaply using techniques known as “room and pillar” mining. Such methods used coal pillars and timber to hold up the roof, because timber and coal were cheap. Since miners worked in separate rooms, labor supervision was difficult and much blasting was required to bring down the coal. Miners themselves were by no means blameless; most were paid by the ton, and when safety interfered with production, safety often took a back seat. For such reasons, American methods yielded more coal per worker than did European techniques, but they were far more dangerous, and toward the end of the nineteenth century, the dangers worsened (see Table 1).5

Table 1

British and American Mine Safety, 1890-1904

(Fatality rates per Thousand Workers per Year)

Years American Anthracite American Bituminous Great Britain
1890-1894 3.29 2.52 1.61
1900-1904 3.13 3.53 1.28

Source: British data from Great Britain, General Report. Other data from Aldrich, Safety First.

Railroads

Nineteenth-century American railroads were also comparatively dangerous to their workers – and their passengers as well – and for similar reasons. Vast North American distances and low population density turned American carriers into predominantly freight haulers – and freight was far more dangerous to workers than passenger traffic, for men had to go in between moving cars for coupling and uncoupling and ride the cars to work brakes. The thin traffic and high wages also forced American carriers to economize on both capital and labor. Accordingly, American lines were cheaply built and used few signals, both of which resulted in many derailments and collisions. Such conditions made American railroad work far more dangerous than that in Britain (see Table 2).6

Table 2

Comparative Safety of British and American Railroad Workers, 1889 – 1901

(Fatality Rates per Thousand Workers per Year)

                                          1889     1895     1901
British railroad workers, all causes      1.14     0.95     0.89
British trainmen (a), all causes          4.26     3.22     2.21
   Coupling                               0.94     0.83     0.74
American railroad workers, all causes     2.67     2.31     2.50
American trainmen, all causes             8.52     6.45     7.35
   Coupling                               1.73c    1.20     0.78
   Braking (b)                            3.25c    2.44     2.03

Source: Aldrich, Safety First, Table 1 and Great Britain Board of Trade, General Report.


Note: Death rates are per thousand employees.

a. Guards, brakemen, and shunters.

b. Deaths from falls from cars and striking overhead obstructions.

Manufacturing

American manufacturing also developed in a distinctively American fashion that substituted power and machinery for labor and manufactured products with interchangeable parts for ease in mass production. Whether American methods were less safe than those in Europe is unclear, but by 1900 they were extraordinarily risky by modern standards, for machines and power sources were largely unguarded. And while competition encouraged factory managers to strive for ever-increased output, they showed little interest in improving safety.7

Worker and Employer Responses

Workers and firms responded to these dangers in a number of ways. Some workers simply left jobs they felt were too dangerous, and risky jobs may have had to offer higher pay to attract workers. After the Civil War life and accident insurance companies expanded, and some workers purchased insurance or set aside savings to offset the income risks from death or injury. Some unions and fraternal organizations also offered their members insurance. Railroads and some mines also developed hospital and insurance plans to care for injured workers while many carriers provided jobs for all their injured men. 8

Improving Safety, 1910-1939

Public efforts to improve safety date from the very beginnings of industrialization. States established railroad regulatory commissions as early as the 1840s. But while most of the commissions were intended to improve safety, they had few powers and were rarely able to exert much influence on working conditions. Similarly, the first state mining commission began in Pennsylvania in 1869, and other states soon followed. Yet most of the early commissions were ineffectual and, as noted, safety actually deteriorated after the Civil War. Factory commissions also dated from this era, but most were understaffed and they too had little power.9

Railroads

The most successful effort to improve work safety during the nineteenth century began on the railroads in the 1880s as a small band of railroad regulators, workers, and managers began to campaign for the development of better brakes and couplers for freight cars. In response, George Westinghouse modified his passenger train air brake in about 1887 so it would work on long freights, while at roughly the same time Eli Janney developed an automatic car coupler. For the railroads such equipment meant not only better safety, but also higher productivity, and after 1888 they began to deploy it. The process was given a boost in 1889-1890 when the newly-formed Interstate Commerce Commission (ICC) published its first accident statistics. They demonstrated conclusively the extraordinary risks to trainmen from coupling and riding freight (Table 2). In 1893 Congress responded, passing the Safety Appliance Act, which mandated use of such equipment. It was the first federal law intended primarily to improve work safety, and by 1900 when the new equipment was widely diffused, risks to trainmen had fallen dramatically.10

Federal Safety Regulation

In the years between 1900 and World War I, a rather strange band of Progressive reformers, muckraking journalists, businessmen, and labor unions pressed for changes in many areas of American life. These years saw the founding of the Federal Food and Drug Administration, the Federal Reserve System, and much else. Work safety also became of increased public concern and the first important developments came once again on the railroads. Unions representing trainmen had been impressed by the Safety Appliance Act of 1893 and after 1900 they campaigned for more of the same. In response, Congress passed a host of regulations governing the safety of locomotives and freight cars. While most of these specific regulations were probably modestly beneficial, collectively their impact was small because unlike the rules governing automatic couplers and air brakes they addressed rather minor risks.11

In 1910 Congress also established the Bureau of Mines in response to a series of disastrous and increasingly frequent explosions. The Bureau was to be a scientific, not a regulatory body and it was intended to discover and disseminate new knowledge on ways to improve mine safety.12

Workers’ Compensation Laws Enacted

Far more important were new laws that raised the cost of accidents to employers. In 1908 Congress passed a federal employers’ liability law that applied to railroad workers in interstate commerce and sharply limited the defenses an employer could offer. Worker fatalities that had once cost the railroads perhaps $200 now cost $2,000. Two years later, in 1910, New York became the first state to pass a workmen’s compensation law. This was a European idea. Instead of requiring injured workers to sue for damages in court and prove the employer was negligent, the new law automatically compensated all injuries at a fixed rate. Compensation appealed to businesses because it made costs more predictable and reduced labor strife. To reformers and unions it promised greater and more certain benefits. Samuel Gompers, leader of the American Federation of Labor, had studied the effects of compensation in Germany and reported that he was impressed with how it stimulated business interest in safety. Between 1911 and 1921 forty-four states passed compensation laws.13

Employers Become Interested in Safety

The sharp rise in accident costs that resulted from compensation laws and tighter employers’ liability initiated the modern concern with work safety and began the long-term decline in work accidents and injuries. Large firms in railroading, mining, manufacturing and elsewhere suddenly became interested in safety. Companies began to guard machines and power sources while machinery makers developed safer designs. Managers began to look for hidden dangers at work, and to require that workers wear hard hats and safety glasses. They also set up safety departments run by engineers and safety committees that included both workers and managers. In 1913 companies founded the National Safety Council to pool information. Government agencies such as the Bureau of Mines and National Bureau of Standards provided scientific support while universities also researched safety problems for firms and industries.14

Accident Rates Begin to Fall Steadily

During the years between World War I and World War II the combination of higher accident costs along with the institutionalization of safety concerns in large firms began to show results. Railroad employee fatality rates declined steadily after 1910, and at some large companies such as DuPont and whole industries such as steel making (see Table 3) safety also improved dramatically. Largely independent changes in technology and labor markets contributed to safety as well. The decline in labor turnover meant fewer new employees, who were relatively likely to get hurt, while the spread of factory electrification not only improved lighting but reduced the dangers from power transmission as well. In coal mining the shift from underground work to strip mining also improved safety. Collectively these long-term forces reduced manufacturing injury rates about 38 percent between 1926 and 1939 (see Table 4).15

Table 3

Steel Industry Fatality and Injury Rates, 1910-1939

(Rates are per million manhours)

Period Fatality rate Injury Rate
1910-1913 0.40 44.1
1937-1939 0.13 11.7

Pattern of Improvement Was Uneven

Yet the pattern of improvement was uneven, both over time and among firms and industries. Safety still deteriorated in times of economic boom, when factories, mines, and railroads were worked to the limit and labor turnover rose. Nor were small companies as successful in reducing risks, for they paid essentially the same compensation insurance premium irrespective of their accident rate, and so the new laws had little effect there. Underground coal mining accidents also showed only modest improvement. Safety was expensive in coal, and many firms were small and saw little payoff from a lower accident rate. The one source of danger that did decline was mine explosions, which diminished in response to technologies developed by the Bureau of Mines. Ironically, however, six disastrous blasts that killed 276 men in 1940 finally led to federal mine inspection in 1941.16

Table 4

Work Injury Rates, Manufacturing and Coal Mining, 1926-1970

(Per Million Manhours)


Year Manufacturing Coal Mining
1926 24.2
1931 18.9 89.9
1939 14.9 69.5
1945 18.6 60.7
1950 14.7 53.3
1960 12.0 43.4
1970 15.2 42.6

Source: U.S. Department of Commerce Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, 1975), Series D-1029 and D-1031.

Postwar Trends, 1945-1970

The economic boom and associated labor turnover during World War II worsened work safety in nearly all areas of the economy, but after 1945 accidents again declined as long-term forces reasserted themselves (Table 4). In addition, after World War II newly powerful labor unions played an increasingly important role in work safety. In the 1960s, however, economic expansion again led to rising injury rates, and the resulting political pressures led Congress to establish the Occupational Safety and Health Administration (OSHA) in 1970. The continuing problem of mine explosions also led to the founding of the Mine Safety and Health Administration (MSHA). The work of these agencies has been controversial, but on balance they have contributed to the continuing reductions in work injuries after 1970.17

References and Further Reading

Aldrich, Mark. Safety First: Technology, Labor and Business in the Building of Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Aldrich, Mark. “Preventing ‘The Needless Peril of the Coal Mine’: the Bureau of Mines and the Campaign Against Coal Mine Explosions, 1910-1940.” Technology and Culture 36, no. 3 (1995): 483-518.

Aldrich, Mark. “The Peril of the Broken Rail: the Carriers, the Steel Companies, and Rail Technology, 1900-1945.” Technology and Culture 40, no. 2 (1999): 263-291.

Aldrich, Mark. “Train Wrecks to Typhoid Fever: The Development of Railroad Medicine Organizations, 1850 -World War I.” Bulletin of the History of Medicine, 75, no. 2 (Summer 2001): 254-89.

Derickson, Alan. “Participative Regulation of Hazardous Working Conditions: Safety Committees of the United Mine Workers of America.” Labor Studies Journal 18, no. 2 (1993): 25-38.

Dix, Keith. Work Relations in the Coal Industry: The Hand Loading Era. Morgantown: University of West Virginia Press, 1977. The best discussion of coalmine work for this period.

Dix, Keith. What’s a Coal Miner to Do? Pittsburgh: University of Pittsburgh Press, 1988. The best discussion of coal mine labor during the era of mechanization.

Fairris, David. “From Exit to Voice in Shopfloor Governance: The Case of Company Unions.” Business History Review 69, no. 4 (1995): 494-529.

Fairris, David. “Institutional Change in Shopfloor Governance and the Trajectory of Postwar Injury Rates in U.S. Manufacturing, 1946-1970.” Industrial and Labor Relations Review 51, no. 2 (1998): 187-203.

Fishback, Price. Soft Coal Hard Choices: The Economic Welfare of Bituminous Coal Miners, 1890-1930. New York: Oxford University Press, 1992. The best economic analysis of the labor market for coalmine workers.

Fishback, Price and Shawn Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000. The best discussions of how employers’ liability rules worked.

Graebner, William. Coal Mining Safety in the Progressive Period. Lexington: University of Kentucky Press, 1976.

Great Britain Board of Trade. General Report upon the Accidents that Have Occurred on Railways of the United Kingdom during the Year 1901. London, HMSO, 1902.

Great Britain Home Office Chief Inspector of Mines. General Report with Statistics for 1914, Part I. London: HMSO, 1915.

Hounshell, David. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States. Baltimore: Johns Hopkins University Press, 1984.

Humphrey, H. B. “Historical Summary of Coal-Mine Explosions in the United States — 1810-1958.” United States Bureau of Mines Bulletin 586 (1960).

Kirkland, Edward. Men, Cities, and Transportation. 2 vols. Cambridge: Harvard University Press, 1948. Discusses railroad regulation and safety in New England.

Lankton, Larry. Cradle to Grave: Life, Work, and Death in Michigan Copper Mines. New York: Oxford University Press, 1991.

Licht, Walter. Working for the Railroad. Princeton: Princeton University Press, 1983.

Long, Priscilla. Where the Sun Never Shines. New York: Paragon, 1989. Covers coal mine safety at the end of the nineteenth century.

Mendeloff, John. Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy. Cambridge: MIT Press, 1979. An accessible modern discussion of safety under OSHA.

National Academy of Sciences. Toward Safer Underground Coal Mines. Washington, DC: NAS, 1982.

Rogers, Donald. “From Common Law to Factory Laws: The Transformation of Workplace Safety Law in Wisconsin before Progressivism.” American Journal of Legal History (1995): 177-213.

Root, Norman and Judy Daley. “Are Women Safer Workers? A New Look at the Data.” Monthly Labor Review 103, no. 9 (1980): 3-10.

Rosenberg, Nathan. Technology and American Economic Growth. New York: Harper and Row, 1972. Analyzes the forces shaping American technology.

Rosner, David and Gerald Markowitz, editors. Dying for Work. Bloomington: Indiana University Press, 1987.

Shaw, Robert. Down Brakes: A History of Railroad Accidents, Safety Precautions, and Operating Practices in the United States of America. London: P. R. Macmillan, 1961.

Trachtenberg, Alexander. The History of Legislation for the Protection of Coal Miners in Pennsylvania, 1824-1915. New York: International Publishers, 1942.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC, 1975.

Usselman, Steven. “Air Brakes for Freight Trains: Technological Innovation in the American Railroad Industry, 1869-1900.” Business History Review 58 (1984): 30-50.

Viscusi, W. Kip. Risk By Choice: Regulating Health and Safety in the Workplace. Cambridge: Harvard University Press, 1983. The most readable treatment of modern safety issues by a leading scholar.

Wallace, Anthony. Saint Clair. New York: Alfred A. Knopf, 1987. Provides a superb discussion of early anthracite mining and safety.

Whaples, Robert and David Buffum. “Fraternalism, Paternalism, the Family and the Market: Insurance a Century Ago.” Social Science History 15 (1991): 97-122.

White, John. The American Railroad Freight Car. Baltimore: Johns Hopkins University Press, 1993. The definitive history of freight car technology.

Whiteside, James. Regulating Danger: The Struggle for Mine Safety in the Rocky Mountain Coal Industry. Lincoln: University of Nebraska Press, 1990.

Wokutch, Richard. Worker Protection Japanese Style: Occupational Safety and Health in the Auto Industry. Ithaca, NY: ILR Press, 1992.

Worrall, John, editor. Safety and the Work Force: Incentives and Disincentives in Workers’ Compensation. Ithaca, NY: ILR Press, 1983.

1 Injuries or fatalities are expressed as rates. For example, if ten workers are injured out of 450 workers during a year, the rate would be 0.0222. For readability it might be expressed as 22.2 per thousand or 2,222 per hundred thousand workers. Rates may also be expressed per million workhours. Thus if the average work year is 2000 hours, ten injuries among 450 workers result in [10/(450 x 2000)] x 1,000,000 = 11.1 injuries per million hours worked.
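The arithmetic in this note is easy to check; the short Python sketch below is illustrative only and is not part of the original article. The injury count, worker count, and 2,000-hour work year are simply the hypothetical example figures used above.

# Illustrative sketch: injury rates computed from the example figures in note 1.
injuries = 10            # injuries during the year (example figure)
workers = 450            # workers employed (example figure)
hours_per_worker = 2000  # assumed average annual hours per worker

rate_per_worker = injuries / workers                          # 0.0222 per worker
rate_per_thousand = 1_000 * rate_per_worker                   # 22.2 per thousand workers
rate_per_hundred_thousand = 100_000 * rate_per_worker         # 2,222 per hundred thousand workers

hours_worked = workers * hours_per_worker                     # 900,000 hours
rate_per_million_hours = 1_000_000 * injuries / hours_worked  # 11.1 per million hours

print(round(rate_per_thousand, 1), "injuries per thousand workers")
print(round(rate_per_million_hours, 1), "injuries per million hours worked")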

2 For statistics on work injuries from 1922 to 1970, see U.S. Department of Commerce, Historical Statistics, Series 1029-1036. Earlier data are in Aldrich, Safety First, Appendix 1-3.

3 Hounshell, American System. Rosenberg, Technology. Aldrich, Safety First.

4 On the workings of the employers’ liability system see Fishback and Kantor, A Prelude, chapter 2.

5 Dix, Work Relations, and his What’s a Coal Miner to Do? Wallace, Saint Clair, is a superb discussion of early anthracite mining and safety. Long, Where the Sun. Fishback, Soft Coal, chapters 1, 2, and 7. Humphrey, “Historical Summary.” Aldrich, Safety First, chapter 2.

6 Aldrich, Safety First, chapter 1.

7 Aldrich, Safety First, chapter 3.

8 Fishback and Kantor, A Prelude, chapter 3, discusses higher pay for risky jobs as well as worker savings and accident insurance. See also Whaples and Buffum, “Fraternalism, Paternalism.” Aldrich, “Train Wrecks to Typhoid Fever.”

9 Kirkland, Men, Cities. Trachtenberg, The History of Legislation. Whiteside, Regulating Danger. An early discussion of factory legislation is in Susan Kingsbury, ed., xxxxx. Rogers, “From Common Law.”

10 On the evolution of freight car technology see White, American Railroad Freight Car, Usselman, “Air Brakes for Freight Trains,” and Aldrich, Safety First, chapter 1. Shaw, Down Brakes, discusses causes of train accidents.

11 Details of these regulations may be found in Aldrich, Safety First, chapter 5.

12 Graebner, Coal Mining Safety; Aldrich, “‘The Needless Peril.’”

13 On the origins of these laws see Fishback and Kantor, A Prelude, and the sources cited therein.

14 For assessments of the impact of early compensation laws see Aldrich, Safety First, chapter 5 and Fishback and Kantor, A Prelude, chapter 3. Compensation in the modern economy is discussed in Worrall, Safety and the Work Force. Government and other scientific work that promoted safety on railroads and in coal mining are discussed in Aldrich, “‘The Needless Peril’,” and “The Broken Rail.”

15 Fairris, “From Exit to Voice.”

16 Aldrich, “‘The Needless Peril,’” and Humphrey, “Historical Summary.”

17 Derickson, “Participative Regulation” and Fairris, “Institutional Change,” also emphasize the role of union and shop floor issues in shaping safety during these years. Much of the modern literature on safety is highly quantitative. For readable discussions see Mendeloff, Regulating Safety (Cambridge: MIT Press, 1979), and Viscusi, Risk by Choice.

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Central Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal fuel, there was little need to use mineral fuel in seventeenth and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade and the James River and Kanawha Canal failed to make necessary improvements in order to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields and seaboard markets via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson Canal (1829) insured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:

Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;

Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coalmining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by less-skilled laborers, the labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous in the era before deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power — even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions insured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners and struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not also be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no fewer than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is effected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War brought major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Between 1860 and 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and by 1864 the real price stood forty-five percent above its 1860 level. In response, production increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
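To make the inflation adjustment explicit (this is the standard deflation formula, not one spelled out in the article; the symbols p for the coal price and P for a general price index are introduced here), the real price in year t divides the nominal price by the price index expressed relative to the 1860 base year:

\[
p_{t}^{\text{real}} \;=\; \frac{p_{t}^{\text{nominal}}}{P_{t}/P_{1860}}
\]

As a purely illustrative example, a nominal price that doubled while the general price level rose by half over the same years would imply a real increase of about one-third, which is roughly the magnitude reported above for 1860-1863.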

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing its railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water table. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the introduction of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and national production of coke stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA listed about a quarter of a million members, had raised a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coal fields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company, symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

 

Table 1: Coal Production in the United States, 1829-1899

Year Anthracite (thousands of tons) Bituminous (thousands of tons) Percent Increase of Total Output over Decade Tons per capita
1829 138 102 n/a 0.02
1839 1,008 552 550 0.09
1849 3,995 2,453 313 0.28
1859 9,620 6,013 142 0.50
1869 17,083 15,821 110 0.85
1879 30,208 37,898 107 1.36
1889 45,547 95,683 107 2.24
1899 60,418 193,323 80 3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
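The decadal percentage column refers to combined anthracite and bituminous output. The 550 percent entry for 1839, for instance, can be recovered from the tonnage figures in the table itself:

\[
\frac{(1008 + 552) - (138 + 102)}{138 + 102} \times 100 \;=\; \frac{1560 - 240}{240} \times 100 \;=\; 550\%
\]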

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926
Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187.

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The Johns Hopkins University Press, 1961.

From GATT to WTO: The Evolution of an Obscure Agency to One Perceived as Obstructing Democracy

Susan Ariel Aaronson, National Policy Association

Historical Roots of GATT and the Failure of the ITO

While the United States has always participated in international trade, it did not take a leadership role in global trade policy making until the Great Depression. One reason for this is that under the US Constitution, Congress has responsibility for promoting and regulating commerce, while the executive branch has responsibility for foreign policy. Trade policy was thus a tug of war between the two branches, which did not always agree on the mix of trade promotion and protection. In 1934, however, the United States began an experiment with the Reciprocal Trade Agreements Act. In the hope of expanding employment, Congress agreed to permit the executive branch to negotiate bilateral trade agreements. (Bilateral agreements are those between two parties — for example, the US and another country.)

During the 1930s, the amount of bilateral negotiation under this act was fairly limited, and in truth it did not do much to expand global or domestic trade. However, the Second World War led policy makers to experiment on a broader level. In the 1940s, working with the British government, the United States developed two innovations to expand and govern trade among nations. These mechanisms were called the General Agreement on Tariffs and Trade (GATT) and the ITO (International Trade Organization). GATT was simply a temporary multilateral agreement designed to provide a framework of rules and a forum to negotiate trade barrier reductions among nations. It was built on the Reciprocal Trade Agreements Act, which allowed the executive branch to negotiate trade agreements, with temporary authority from the Congress.

The ITO

The ITO, in contrast, set up a code of world trade principles and a formal international institution. The ITO’s architects were greatly influenced by John Maynard Keynes, the British economist. The ITO represented an internationalization of the view that governments could play a positive role in encouraging international economic growth. It was remarkably comprehensive, including chapters on commercial policy, investment, employment and even business practices (what we call antitrust or competition policies today). The ITO also included a secretariat with the power to arbitrate trade disputes. But the ITO was not popular, and it took a long time to negotiate. Its final charter was signed by 54 nations at the UN Conference on Trade and Employment in Havana in March 1948, but this was too late. The ITO missed the flurry of support for internationalism that accompanied the end of the Second World War and led to the establishment of agencies such as the UN, the IMF and the World Bank. The US Congress never brought membership in the ITO to a vote, and when the president announced that he would not seek ratification of the Havana Charter, the ITO effectively died. Consequently the provisional GATT (which was not a formal international organization) governed world trade until 1994 (Aaronson, 1996, 3-5).

GATT

GATT was a club, albeit an increasingly popular one. But GATT was not a treaty. The United States (and other nations) joined GATT under its Protocol of Provisional Application. This meant that the provisions of GATT were binding only insofar as they were not inconsistent with a nation’s existing legislation. With this clause, the United States could spur trade liberalization or contravene the rules of GATT when politically or economically necessary (US Tariff Commission, 1950, 19-21, 20 note 4).

From 1948 until 1993, GATT’s purview and membership grew dramatically. During this period, GATT sponsored eight trade rounds in which member nations, called contracting parties, agreed to mutually reduce trade barriers. But trade liberalization under the GATT came with costs to some Americans. Important industries in the United States such as textiles, television, steel and footwear suffered from foreign competition and some workers lost jobs. However, most Americans benefited from this growth in world trade: as consumers, they got a cheaper and more diverse supply of goods; as producers, most found new markets and growing employment. From 1948 to about 1980 this economic growth came at little cost to the American economy as a whole or to American democracy (Aaronson, 1996, 133-134).

The Establishment of the WTO

By the late 1980s, a growing number of nations decided that GATT could better serve global trade expansion if it became a formal international organization. In 1988, the US Congress, in the Omnibus Trade and Competitiveness Act, explicitly called for more effective dispute settlement mechanisms. Member governments pressed for negotiations to formalize GATT and to make it a more powerful and comprehensive organization. The result was the World Trade Organization (WTO), which was established during the Uruguay Round (1986-1993) of GATT negotiations and which subsumed GATT. The WTO provides a permanent arena for member governments to address international trade issues, and it oversees the implementation of the trade agreements negotiated in the Uruguay Round of trade talks.

The WTO’s Powers

The WTO is not simply GATT transformed into a formal international organization. It covers a much broader purview, including subsidies, intellectual property, food safety and other policies that were once solely the subject of national governments. The WTO also has strong dispute settlement mechanisms. As under GATT, panels weigh trade disputes, but these panels must adhere to a strict time schedule. Moreover, in contrast with GATT procedure, no country can veto or delay panel decisions. If US laws protecting the environment (such as laws requiring gas mileage standards) were found to be de facto trade impediments, the US would have to take action: it could change its law, do nothing and face retaliation, or compensate the other party for lost trade if it kept such a law (Jackson, 1994).

The WTO’s Mixed Record

Despite its broader scope and powers, the WTO has had a mixed record. Nations have clamored to join this new organization and receive the benefits of expanded trade and formalized multinational rules. Today the WTO has grown to 142 members. Nations such as China, Russia, Saudi Arabia and Ukraine hope to join the WTO soon. But since the WTO was created, its members have not been able to agree on the scope of a new round of trade talks. Many developing countries believe that their industrialized trading partners have not fully granted them the benefits promised under the Uruguay Round of GATT. Some countries regret including intellectual property protections under the aegis of the WTO.

Protests

A wide range of citizens has become concerned about the effect of trade rules upon the achievement of other important policy goals. In India, Latin America, Europe, Canada and the United States, alarmed citizens have taken to the streets to protest globalization and, in particular, what they perceive as the undemocratic nature of the WTO. During the fiftieth anniversary of GATT in Geneva in 1998, some 30,000 people rioted. During the Seattle Ministerial Meetings in November/December 1999, again about 30,000 people protested, some violently. When the WTO attempts to launch a new round in Doha, Qatar, later this year, protestors again plan to disrupt the proceedings (Aaronson, 2001).

Explaining Recent Protests about the WTO

During the first thirty years of GATT’s history, the relationship of trade policy to human rights, labor rights, consumer protection, and the environment was essentially “off-stage.” This is because GATT’s role was limited to governing how nations used traditional tools of economic protection — border measures such as tariffs and quotas.

GATT’s Scope Was Initially Limited

Why did policy makers limit the scope of GATT? The US could participate in GATT negotiations only when Congress granted extensions of the Reciprocal Trade Agreements Act of 1934, and this act allowed the president to negotiate only commercial policy. As a result, GATT said almost nothing about the effects of trade (whether trade degrades the environment or injures workers) or the conditions of trade (whether disparate systems of regulation, such as consumer, environmental, or labor standards, allow for fair competition). From the 1940s to the 1970s, few policy makers would admit that their systems of regulation sometimes distorted trade. Such regulations were the turf of domestic policy makers, not foreign policy makers, and GATT said little about domestic norms or regulations. In 1971, GATT established a working party on environmental measures and international trade, but it did not meet until 1991, after much pressure from some European nations (Charnovitz, 1992, 341, 348).

GATT’s Scope Widened to Include Domestic Policies

Policy makers and economists have long recognized that trade and social regulations can intersect. Although the United States did not ban trade in slaves until 1807, the US was among the first nations to ban goods manufactured by forced labor (prison labor), in the Tariff Act of 1890 (section 51) (Aaronson, 2001, 44). This provision influenced many trade agreements that followed, including GATT, which includes a similar provision. But in the 1970s, public officials began to admit that domestic regulations, such as health and safety regulations, could, with or without intent, also distort trade (Keck and Sikkink, 1998, 41-47). They worked to include rules governing such regulations in the purview of GATT and other trade agreements. This process began in the Tokyo Round (1973-79) of GATT negotiations, but came to fruition during the Uruguay Round. Policy makers expanded the turf of trade agreements to include rules governing once-domestic policies such as intellectual property, food safety, and subsidies (GATT Secretariat, 1993, Annex IV, 91).

Rising Importance of International Trade and Trade Policy

In 1970, the import and export of American goods and services added up to only about 11.5% of gross domestic product. This share climbed swiftly to 20.5% in 1980 and averaged about 24% at the end of the century. (In addition, a persistent trade deficit emerged by the mid-1980s, with imports exceeding exports by significant amounts year after year; in 1987, for example, imports exceeded exports by 3% of GDP.)
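Expressed as a formula (the conventional trade-openness ratio, used here only to make the percentages concrete rather than a measure defined in the article; X denotes exports and M imports), these figures relate total trade to gross domestic product:

\[
\text{openness}_{t} \;=\; \frac{X_{t} + M_{t}}{\mathit{GDP}_{t}} \times 100
\]

On this measure, the cited rise from about 11.5 in 1970 to roughly 24 by the end of the century means that total trade roughly doubled relative to the size of the economy.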

Public Opinion Has Become More Concerned about Trade Policy

Partly because of the rising importance of international trade, the relationship of trade policy to the achievement of other public policy goals has, since at least 1980, become an important and contentious issue. A growing number of citizens began to question whether trade agreements should include social or environmental issues. Others argued that trade agreements had the effect of undermining domestic regulations such as environmental, food safety or consumer regulations. Still others argued that trade agreements did not sufficiently regulate the behavior of global corporations. Although relatively few Americans have taken to the streets to protest trade laws, polling data reveal that Americans agree with some of the principal concerns of the protesters. They want trade agreements to raise the environmental and labor standards in the nations with which Americans trade.

Most Agree That Trade Fuels Economic Growth

On the other hand, most people agree with analysts who argue that trade helps fuel American growth (PIPA, 1999). (For example, 93% of economists surveyed agreed that tariffs and import quotas usually reduce general economic welfare (Alston, Kearl, and Vaughan, 1992).) Economists argue that the US must trade if it is to maintain its high standard of living. Autarky is not a practical option even for America’s mighty and diversified economy. Although the US is blessed with navigable rivers, fertile soil, abundant resources, a hard-working populace, and a huge internal market, Americans must trade because they cannot efficiently or sufficiently produce all the goods and services that citizens desire. Moreover, there are some goods that Americans cannot produce. That is why America, from the beginning of its history, has signed trade agreements with other nations.

Building a National Consensus on Trade Policy Is a Difficult Balancing Act

For the last decade, Americans have not been able to find common ground on the turf of trade policy and how to ensure that trade agreements such as those enforced by the WTO do not thwart achievement of other important policy goals. After 1993, American business did not push for a new round of trade talks, as the global and the domestic economy prospered. But in recent months (early 2001), business has been much more active, as has George W. Bush’s Administration, in trying to develop a new round of trade talks under the WTO. Business has become more eager as economic growth has slowed. Moreover, American business leaders seem to have learned the lessons of the 1999 Seattle protests. The members of the Business Roundtable, an organization of chief executive officers from America’s largest, most prestigious companies, have noted, “we must first build a national consensus on trade policy… Building this consensus will…require the careful consideration of international labor and environmental issues…that cannot be ignored.” The Roundtable concluded by noting that the problem is not whether these issues are trade policy issues. It stressed that trade proponents and critics must find a strategy — a trade policy approach that allows negotiators to address these issues constructively (Business Roundtable, 2001). The Roundtable was essentially saying that Americans must find common ground and must acknowledge the relationship of trade policy to the achievement of other policy goals. The Roundtable was not alone. Other formal and informal business groups, such as the National Association of Manufacturers, as well as environmental and labor groups, have tried to develop an inventory of ideas on how to pursue trade agreements while also promoting other important policy goals such as environmental protection or labor rights. Republican members of Congress responded publicly to these efforts with a warning that such efforts could compromise the President’s strategy for trade liberalization. As of this writing, however, the US Trade Representative has not announced how it will resolve the relationship between trade and social/environmental policy goals within specific trade agreements such as the WTO. Resolving these issues will undoubtedly be very difficult, so the WTO will probably remain a source of contention.

References

Aaronson, Susan. Trade and the American Dream: A Social History of Postwar Trade Policy. Lexington, KY: University Press of Kentucky, 1996.

Aaronson, Susan. Taking Trade to the Streets: The Lost History of Efforts to Shape Globalization. Ann Arbor: University of Michigan Press, 2001.

Alston, Richard M., J.R. Kearl, and Michael B. Vaughan. “Is There a Consensus among Economists in the 1990’s?” American Economic Review: Papers and Proceedings 82 (1992): 203-209.

Business Roundtable. “The Case for US Trade Leadership: The United States is Falling Behind.” Statement 2/9/2001. www.brt.org.

Charnovitz, Steve. “Environmental and Labour Standards in Trade.” World Economy 15 (1992).

GATT Secretariat. “Final Act Embodying the Results of the Uruguay Round of Multilateral Trade Negotiations.” December 15, 1993.

Jackson, John H. “The World Trade Organization, Dispute Settlement and Codes of Conduct.” In The New GATT: Implications for the United States, edited by Susan M. Collins and Barry P. Bosworth, 63-75. Washington: Brookings, 1994.

Keck, Margaret E. and Kathryn Sikkink. Activists beyond Borders: Advocacy Networks in International Politics. Ithaca: Cornell University Press, 1998.

Program on International Policy Attitudes. “Americans on Globalization.” Poll conducted October 21-October 29, 1999 with 18,126 adults. See www.pipa.org/OnlineReports/Globalization/executive_summary.html

US Tariff Commission. Operation of the Trade Agreements Program, Second Report,

Arthur Young

David R. Stead, University of York

Arthur Young (1741-1820) was widely regarded by his contemporaries as the leading agricultural writer of the time. Born in London, he was the youngest child of the Suffolk gentry landowners Anne and the Reverend Arthur Young. Young was educated at Lavenham Grammar School, and after abortive attempts to become a merchant and then an army officer, in 1763 took a farm on his mother’s estate at Bradfield, although he had little knowledge of farming. Nevertheless he conducted a variety of agricultural experiments and continued his early interest in writing by publishing his first major agricultural work, The Farmer’s Letters, in 1767. Young’s subsequent output was prolific. Most famous are his Tours of England, Ireland and France, which mixed travel diaries with facts, figures and critical commentary on farming practices. In 1784 he founded the periodical Annals of Agriculture, editing all forty-six of its published volumes as well as contributing a large proportion of their content. Young was somewhat controversially appointed Secretary of the Board of Agriculture (a state-sponsored body promoting improved farming standards) in 1793, a position he held until his death. He also wrote six of the Board’s surveys of English counties.

Young was a vigorous advocate of agrarian improvements, especially enclosures and long leases, and his statistics and lively prose must have helped publicize and diffuse the innovations in farming practices that were taking place. He was consulted by agriculturists and politicians at home and abroad, including George Washington, and received numerous honors. His marriage to Martha Allen from 1765 was unhappy, though, with faults seemingly on both sides. The youngest of the couple’s four children died in 1797, triggering the melancholia and religious fervor that characterised Young in his later years. His prodigious work rate slowed after about 1805 on account of deteriorating vision, and ultimately blindness.

Some contemporary rivals, notably William Marshall, were fiercely critical of Young’s abilities as a farmer and accurate observer; the judgment of historians remains divided. Young certainly never made a financial success of farming, but this was partly because he expended large sums on agricultural experiments and was frequently absent from his farm, writing or travelling. Allegations that Young’s enquiries were based on alehouse gossip, or conducted too hastily, are perhaps not without some truth, but his sample survey investigative procedure undoubtedly represented a pioneering scientific approach to agricultural research. Ironically, historians’ analysis of Young’s facts and figures has produced results that do not always support his original conclusions. For example, enclosures turn out not to have been as important in increasing farm output as Young maintained.

Bibliography

Allen, Robert C. and Cormac Ó Gráda. “On the Road Again with Arthur Young: English, Irish, and French Agriculture during the Industrial Revolution.” Journal of Economic History 48 (1988): 93-116.

Betham-Edwards, M., editor. The Autobiography of Arthur Young. London: Smith, Elder & Co., 1898.

Brunt, Liam. “The Advent of the Sample Survey in the Social Sciences.” The Statistician 50 (2001): 179-89.

Brunt, Liam. “Rehabilitating Arthur Young.” Economic History Review 56 (2003): 265-99.

Gazley, John G. The Life of Arthur Young, 1741-1820. Philadelphia: American Philosophical Society, 1973.

Kerridge, Eric. “Arthur Young and William Marshall.” History Studies 1 (1968): 43-53.

Mingay, G. E., editor. Arthur Young and His Times. London: Macmillan, 1975.

Citation: Stead, David. “Arthur Young”. EH.Net Encyclopedia, edited by Robert Whaples. November 18, 2003. URL http://eh.net/encyclopedia/arthur-young/