
Women Workers in the British Industrial Revolution

Joyce Burnette, Wabash College

Historians disagree about whether the British Industrial Revolution (1760-1830) was beneficial for women. Frederick Engels, writing in the late nineteenth century, thought that the Industrial Revolution increased women’s participation in labor outside the home, and claimed that this change was emancipating.1 More recent historians dispute the claim that women’s labor force participation rose, and focus more on the disadvantages women experienced during this time period.2 One thing is certain: the Industrial Revolution was a time of important changes in the way that women worked.

The Census

Unfortunately, the historical sources on women’s work are neither as complete nor as reliable as we would like. Aggregate information on the occupations of women is available only from the census, and while census data has the advantage of being comprehensive, it is not a very good measure of work done by women during the Industrial Revolution. For one thing, the census does not provide any information on individual occupations until 1841, which is after the period we wish to study.3 Even then the data on women’s occupations is questionable. For the 1841 census, the directions for enumerators stated that “The professions &c. of wives, or of sons or daughters living with and assisting their parents but not apprenticed or receiving wages, need not be inserted.” Clearly this census would not give us an accurate measure of female labor force participation. Table One illustrates the problem further; it shows the occupations of men and women recorded in the 1851 census, for 20 occupational categories. These numbers suggest that female labor force participation was low, and that 40 percent of occupied women worked in domestic service. However, economic historians have demonstrated that these numbers are misleading. First, many women who were actually employed were not listed as employed in the census. Women who appear in farm wage books have no recorded occupation in the census.4 At the same time, the census over-estimates participation by listing in the “domestic service” category women who were actually family members. In addition, the census exaggerates the extent to which women were concentrated in domestic service occupations because many women listed as “maids”, and included in the domestic servant category in the aggregate tables, were really agricultural workers.5

Table One

Occupational Distribution in the 1851 Census of Great Britain

Occupational Category Males (thousands) Females (thousands) Percent Female
Public Administration 64 3 4.5
Armed Forces 63 0 0.0
Professions 162 103 38.9
Domestic Services 193 1135 85.5
Commercial 91 0 0.0
Transportation & Communications 433 13 2.9
Agriculture 1788 229 11.4
Fishing 36 1 2.7
Mining 383 11 2.8
Metal Manufactures 536 36 6.3
Building & Construction 496 1 0.2
Wood & Furniture 152 8 5.0
Bricks, Cement, Pottery, Glass 75 15 16.7
Chemicals 42 4 8.7
Leather & Skins 55 5 8.3
Paper & Printing 62 16 20.5
Textiles 661 635 49.0
Clothing 418 491 54.0
Food, Drink, Lodging 348 53 13.2
Other 445 75 14.4
Total Occupied 6545 2832 30.2
Total Unoccupied 1060 5294 83.3

Source: B.R. Mitchell, Abstract of British Historical Statistics, Cambridge: Cambridge University Press, 1962, p. 60.

Domestic Service

Domestic work – cooking, cleaning, caring for children and the sick, fetching water, making and mending clothing – took up the bulk of women’s time during the Industrial Revolution period. Most of this work was unpaid. Some families were well-off enough that they could employ other women to do this work, as live-in servants, as charring women, or as service providers. Live-in servants were fairly common; even middle-class families had maids to help with the domestic chores. Charring women did housework on a daily basis. In London women were paid 2s.6d. per day for washing, which was more than three times the 8d. typically paid for agricultural labor in the country. However, a “day’s work” in washing could last 20 hours, more than twice as long as a day’s work in agriculture.6 Other women worked as laundresses, doing the washing in their own homes.
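
The pre-decimal currency used in these wage figures (12 pence to the shilling, 20 shillings to the pound) recurs throughout this article, so a short illustrative calculation may help. The sketch below is not from the sources; it simply restates the figures just cited, together with the 20-hour washing day and the roughly ten-hour agricultural day mentioned in footnotes 6 and 12.

```python
# Illustrative arithmetic only; every figure is quoted in the text or footnotes above.
# Pre-decimal currency: 1 shilling (s.) = 12 pence (d.).

def to_pence(shillings=0, pence=0):
    """Convert a pre-decimal wage into pence."""
    return 12 * shillings + pence

washing_day = to_pence(shillings=2, pence=6)  # 2s.6d. for a day's washing in London = 30d.
farm_day = to_pence(pence=8)                  # typical 8d. daily wage in agriculture

print(washing_day / farm_day)  # 3.75 -- "more than three times" the agricultural day wage
print(washing_day / 20)        # 1.5d. per hour over a 20-hour washing day
print(farm_day / 10)           # 0.8d. per hour over a roughly ten-hour agricultural day
```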

Cottage Industry

Before factories appeared, most textile manufacture (including the main processes of spinning and weaving) was carried out under the “putting-out” system. Since raw materials were expensive, textile workers rarely had enough capital to be self-employed, but would take raw materials from a merchant, spin or weave the materials in their homes, and then return the finished product and receive a piece-rate wage. This system disappeared during the Industrial Revolution as new machinery requiring water or steam power appeared, and work moved from the home to the factory.

Before the Industrial Revolution, hand spinning had been a widespread female employment. It could take as many as ten spinners to provide one hand-loom weaver with yarn, and men did not spin, so most of the workers in the textile industry were women. The new textile machines of the Industrial Revolution changed that. Wages for hand-spinning fell, and many rural women who had previously spun found themselves unemployed. In a few locations, new cottage industries such as straw-plaiting and lace-making grew and took the place of spinning, but in other locations women remained unemployed.

Another important cottage industry was the pillow-lace industry, so called because women wove the lace on pins stuck in a pillow. In the late-eighteenth century women in Bedford could earn 6s. a week making lace, which was about 50 percent more than women earned in agriculture. However, this industry too disappeared due to mechanization. Following Heathcote’s invention of the bobbinet machine (1809), cheaper lace could be made by embroidering patterns on machine-made lace net. This new type of lace created a new cottage industry, that of “lace-runners” who embroidered patterns on the lace.

The straw-plaiting industry employed women braiding straw into bands used for making hats and bonnets. The industry prospered around the turn of the century thanks to the invention of a simple tool for splitting the straw and to the war, which cut off competition from Italy. At this time women could earn 4s. to 6s. per week plaiting straw. This industry also declined, though, following the increase in free trade with the Continent in the 1820s.

Factories

A defining feature of the Industrial Revolution was the rise of factories, particularly textile factories. Work moved out of the home and into a factory, which used a central power source to run its machines. Water power was used in most of the early factories, but improvements in the steam engine made steam power possible as well. The most dramatic productivity growth occurred in the cotton industry. The invention of James Hargreaves’ spinning jenny (1764), Richard Arkwright’s “throstle” or “water frame” (1769), and Samuel Crompton’s spinning mule (1779, so named because it combined features of the two earlier machines) revolutionized spinning. British production of cotton cloth expanded dramatically, and declining prices for the cloth encouraged both domestic consumption and export. Machines also appeared for other parts of the cloth-making process, the most important of which was Edmund Cartwright’s powerloom, which was adopted slowly because of imperfections in the early designs, but was widely used by the 1830s. While cotton was the most important textile of the Industrial Revolution, there were advances in machinery for silk, flax, and wool production as well.7

The advent of new machinery changed the gender division of labor in textile production. Before the Industrial Revolution, women spun yarn using a spinning wheel (or occasionally a distaff and spindle). Men did not spin, and this division of labor made sense because women were trained to have more dexterity than men, and because men’s greater strength made them more valuable in other occupations. In contrast to spinning, handloom weaving was done by both sexes, but men outnumbered women. Men monopolized highly skilled preparation and finishing processes such as wool combing and cloth-dressing. With mechanization, the gender division of labor changed. Women used the spinning jenny and water frame, but mule spinning was almost exclusively a male occupation because it required more strength, and because the male mule-spinners actively opposed the employment of female mule-spinners. Women mule-spinners in Glasgow, and their employers, were the victims of violent attacks by male spinners trying to reduce the competition in their occupation.8 While women moved out of spinning, they seem to have increased their employment in weaving (both in handloom weaving and eventually in powerloom factories). Both sexes were employed as powerloom operators.

Table Two

Factory Workers in 1833: Females as a Percent of the Workforce

Industry Ages 12 and under Ages 13-20 Ages 21+ All Ages
Cotton 51.8 65.0 52.2 58.0
Wool 38.6 46.2 37.7 40.9
Flax 54.8 77.3 59.5 67.4
Silk 74.3 84.3 71.3 78.1
Lace 38.7 57.4 16.6 36.5
Potteries 38.1 46.9 27.1 29.4
Dyehouse 0.0 0.0 0.0 0.0
Glass 0.0 0.0 0.0 0.0
Paper - 100.0 39.2 53.6
Whole Sample 52.8 66.4 48.0 56.8

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX. Mitchell collected data from 82 cotton factories, 65 wool factories, 73 flax factories, 29 silk factories, 7 potteries, 11 lace factories, one dyehouse, one “glass works”, and 2 paper mills throughout Great Britain.

While the highly skilled and highly paid task of mule-spinning was a male occupation, many women and girls were engaged in other tasks in textile factories. For example, the wet-spinning of flax, introduced in Leeds in 1825, employed mainly teenage girls. Girls often worked as assistants to mule-spinners, piecing together broken threads. In fact, females were a majority of the factory labor force. Table Two shows that 57 percent of factory workers were female, most of them under age 20. Women were widely employed in all the textile industries, and constituted the majority of workers in cotton, flax, and silk. Outside of textiles, women were employed in potteries and paper factories, but not in dye or glass manufacture. Of the women who worked in factories, 16 percent were under age 13, 51 percent were between the ages of 13 and 20, and 33 percent were age 21 and over. On average, girls earned the same wages as boys. Children’s wages rose from about 1s.6d. per week at age 7 to about 5s. per week at age 15. Beginning at age 16, a large gap between male and female wages appeared. At age 30, women factory workers earned only one-third as much as men.

Figure One
Distribution of Male and Female Factory Employment by Age, 1833

[Figure not reproduced.]

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.
The y-axis shows the percentage of total employment within each sex that is in that five-year age category.

Figure Two
Wages of Factory Workers in 1833

[Figure not reproduced.]

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.

Agriculture

Wage Workers

Wage-earners in agriculture generally fit into one of two broad categories – servants who were hired annually and received part of their wage in room and board, and day-laborers who lived independently and were paid a daily or weekly wage. Before industrialization servants comprised between one-third and one-half of labor in agriculture.9 For servants the value of room and board was a substantial portion of their compensation, so the ratio of money wages is an under-estimate of the ratio of total wages (see Table Three). Most servants were young and unmarried. Because servants were paid part of their wage in kind, as board, the use of the servant contract tended to fall when food prices were high. During the Industrial Revolution the use of servants seems to have fallen in the South and East.10 The percentage of servants who were female also declined in the first half of the nineteenth century.11

Table Three

Wages of Agricultural Servants (£ per year)

Year Location Male Money Wage Male In-Kind Wage Female Money Wage Female In-Kind Wage Ratio of Money Wages Ratio of Total Wages
1770 Lancashire 7 9 3 6 0.43 0.56
1770 Oxfordshire 10 12 4 8 0.40 0.55
1770 Staffordshire 11 9 4 6 0.36 0.50
1821 Yorkshire 16.5 27 7 18 0.42 0.57

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.

While servants lived with the farmer and received food and lodging as part of their wage, laborers lived independently, received fewer in-kind payments, and were paid a daily or a weekly wage. Though the majority of laborers were male, some were female. Table Four shows the percentage of laborers who were female at various farms in the late-eighteenth and early-nineteenth centuries. These numbers suggest that female employment was widespread, but varied considerably from one location to the next. Compared to men, female laborers generally worked fewer days during the year. The employment of female laborers was concentrated around the harvest, and women rarely worked during the winter. While men commonly worked six days per week, outside of harvest women generally averaged around four days per week.

Table Four

Employment of Women as Laborers in Agriculture:
Percentage of Annual Work-Days Worked by Females

Year Location Percent Female
1772-5 Oakes in Norton, Derbyshire 17
1774-7 Dunster Castle Farm, Somerset 27
1785-92 Dunster Castle Farm, Somerset 40
1794-5 Dunster Castle Farm, Somerset 42
1801-3 Dunster Castle Farm, Somerset 35
1801-4 Nettlecombe Barton, Somerset 10
1814-6 Nettlecombe Barton, Somerset 7
1826-8 Nettlecombe Barton, Somerset 5
1828-39 Shipton Moyne, Gloucestershire 19
1831-45 Oakes in Norton, Derbyshire 6
1836-9 Dunster Castle Farm, Somerset 26
1839-40 Lustead, Norfolk 6
1846-9 Dunster Castle Farm, Somerset 29

Sources: Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History 59 (March 1999): 41-67; Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999. Sotheron-Estcourt accounts, G.R.O. D1571; Ketton-Cremer accounts, N.R.O. WKC 5/250

The wages of female day-laborers were fairly uniform; generally a farmer paid the same wage to all the adult women he hired. Women’s daily wages were between one-third and one-half of male wages. Women generally worked shorter days, though, so the gap in hourly wages was not quite this large.12 In the less populous counties of Northumberland and Durham, male laborers were required to provide a “bondager,” a woman (usually a family member) who was available for day-labor whenever the employer wanted her.13

Table Five

Wages of Agricultural Laborers

Year Location Female Wage (d./day) Male Wage (d./day) Ratio (female/male)
1770 Yorkshire 5 12 0.42
1789 Hertfordshire 6 16 0.38
1797 Warwickshire 6 14 0.43
1807 Oxfordshire 9 23 0.39
1833 Cumberland 12 24 0.50
1833 Essex 10 22 0.45
1838 Worcester 9 18 0.50

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
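
The distinction drawn above between daily and hourly wage gaps can be illustrated with a small back-of-the-envelope calculation. The sketch below is an assumption: it takes three rows of Table Five at face value and applies the typical day lengths reported in footnote 12 (about twelve hours for men and ten for women); it is not an estimate from the original sources.

```python
# Daily wages in pence per day, taken from Table Five; hours are the typical day
# lengths given in footnote 12.  The hourly ratios are rough illustrations only.

table_five = {
    ("Yorkshire", 1770): (5, 12),       # (female d./day, male d./day)
    ("Hertfordshire", 1789): (6, 16),
    ("Cumberland", 1833): (12, 24),
}

FEMALE_HOURS, MALE_HOURS = 10, 12

for (county, year), (female, male) in table_five.items():
    daily_ratio = female / male
    hourly_ratio = (female / FEMALE_HOURS) / (male / MALE_HOURS)
    print(f"{county} {year}: daily {daily_ratio:.2f}, hourly {hourly_ratio:.2f}")

# Yorkshire in 1770, for example: 0.42 per day but 0.50 per hour,
# so the hourly gap is indeed smaller than the daily gap.
```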

Various sources suggest that women’s employment in agriculture declined during the early nineteenth century. Enclosure increased farm size and changed the patterns of animal husbandry, both of which seem to have led to reductions in female employment.14 More women were employed during harvest than during other seasons, but women’s employment during harvest declined as the scythe replaced the sickle as the most popular harvest tool. While women frequently harvested with the sickle, they did not use the heavier scythe.15 Female employment fell the most in the East, where farms increasingly specialized in grain production. Women had more work in the West, which specialized more in livestock and dairy farming.16

Non-Wage-Earners

During the eighteenth century there were many opportunities for women to be productively employed in farm work on their own account, whether they were wives of farmers on large holdings, or wives of landless laborers. In the early nineteenth century, however, many of these opportunities disappeared, and women’s participation in agricultural production fell.

In a village that had a commons, even if the family merely rented a cottage the wife could be self-employed in agriculture because she could keep a cow, or other animals, on the commons. By careful management of her stock, a woman might earn as much during the year as her husband earned as a laborer. Women also gathered fuel from the commons, saving the family considerable expense. The enclosure of the commons, though, eliminated these opportunities. In an enclosure, land was re-assigned so as to eliminate the commons and consolidate holdings. Even when the poor had clear legal rights to use the commons, these rights were not always compensated in the enclosure agreement. While enclosure occurred at different times for different locations, the largest waves of enclosures occurred in the first two decades of the nineteenth century, meaning that, for many, opportunities for self-employment in agriculture declined at the same time as employment in cottage industry declined.17

Only a few opportunities for agricultural production remained for the landless laboring family. In some locations landlords permitted landless laborers to rent small allotments, on which they could still grow some of their own food. The right to glean on fields after harvest seems to have been maintained at least through the middle of the nineteenth century, by which time it had become one of the few agricultural activities available to women in some areas. Gleaning was a valuable right; the value of the grain gleaned was often between 5 and 10 percent of the family’s total annual income.18

In the eighteenth century it was common for farmers’ wives to be actively involved in farm work, particularly in managing the dairy, pigs, and poultry. The dairy was an important source of income for many farms, and its success depended on the skill of the mistress, who usually ran the operation with no help from men. In the nineteenth century, however, farmers’ wives were more likely to withdraw from farm management, leaving the dairy to the management of dairymen who paid a fixed fee for the use of the cows.19 While poor women withdrew from self-employment in agriculture because of lost opportunities, farmers’ wives seem to have withdrawn because greater prosperity allowed them to enjoy more leisure.

It was less common for women to manage their own farms, but not unknown. Commercial directories list numerous women farmers. For example, the 1829 Directory of the County of Derby lists 3354 farmers, of whom 162, or 4.8 percent, were clearly female.20 While the commercial directories themselves do not indicate to what extent these women were actively involved in their farms, other evidence suggests that at least some women farmers were actively involved in the work of the farm.21

Self-Employed

During the Industrial Revolution period women were also active businesswomen in towns. Among business owners listed in commercial directories, about 10 percent were female (see Table Six). Table Seven shows the percentage female in all the trades with at least 25 people listed in the 1788 Manchester commercial directory. Single women, married women, and widows are included in these numbers. Some of these women were widows carrying on the businesses of their deceased husbands, but even then they were not simply figureheads. Widows often continued their husband’s businesses because they had been active in management of the business while their husband was alive, and wished to continue.22 Sometimes married women were engaged in trade separately from their husbands. Women most commonly ran shops and taverns, and worked as dressmakers and milliners, but they were not confined to these areas, and appear in most of the trades listed in commercial directories. Manchester, for example, had six female blacksmiths and five female machine makers in 1846. Between 1730 and 1800 there were 121 “rouping women” selling off estates in Edinburgh.23

Table Six

Business Owners Listed in Commercial Directories

Date City Male Female Unknown Gender Percent Female
1788 Manchester 2033 199 321 8.9
1824-5 Manchester 4185 297 1671 6.6
1846 Manchester 11,942 1222 2316 9.3
1850 Birmingham 15,054 2020 1677 11.8
1850 Derby 2415 332 194 12.1

Sources: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984); Pigot and Dean’s Directory for Manchester, Salford, &c. for 1824-5 (Manchester 1825); Slater’s National Commercial Directory of Ireland (Manchester, 1846); Slater’s Royal National and Commercial Directory (Manchester, 1850)

Table Seven

Women in Trades in Manchester, 1788

Trade Men Women Gender Unknown Percent Female
Apothecary/ Surgeon / Midwife 29 1 5 3.3
Attorney 39 0 3 0.0
Boot and Shoe makers 87 0 1 0.0
Butcher 33 1 1 2.9
Calenderer 31 4 5 11.4
Corn & Flour Dealer 45 4 5 8.2
Cotton Dealer 23 0 2 0.0
Draper, Mercer, Dealer of Cloth 46 15 19 24.6
Dyer 44 3 18 6.4
Fustian Cutter / Shearer 54 2 0 3.6
Grocers & Tea Dealers 91 16 12 15.0
Hairdresser & Peruke maker 34 1 0 2.9
Hatter 45 3 4 6.3
Joiner 34 0 1 0.0
Liquor dealer 30 4 14 11.8
Manufacturer, cloth 257 4 118 1.5
Merchant 58 1 18 1.7
Publichouse / Inn / Tavern 126 13 2 9.4
School master / mistress 18 10 0 35.7
Shopkeeper 107 16 4 13.0
Tailor 59 0 1 0.0
Warehouse 64 0 14 0.0

Source: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984)

Guilds often controlled access to trades, admitting only those who had served an apprenticeship and thus earned the “freedom” of the trade. Women could obtain “freedom” not only by apprenticeship, but also by widowhood. The widow of a tradesman was often considered knowledgeable enough in the trade that she was given the right to carry on the trade even without an apprenticeship. In the eighteenth century women were apprenticed to a wide variety of trades, including butchery, bookbinding, brush making, carpentry, ropemaking and silversmithing.24 Between the eighteenth and nineteenth centuries the number of females apprenticed to trades declined, possibly suggesting reduced participation by women. However, the power of the guilds and the importance of apprenticeship were also declining during this time, so the decline in female apprenticeships may not have been an important barrier to employment.25

Many women worked in the factories of the Industrial Revolution, and a few women actually owned factories. In Keighley, West Yorkshire, Ann Illingworth, Miss Rachael Leach, and Mrs. Betty Hudson built and operated textile mills.26 In 1833 Mrs. Doig owned a powerloom factory in Scotland, which employed 60 workers.27

While many women did successfully enter trades, there were obstacles to women’s employment that kept their numbers low. Women generally received less education than men (though education of the time was of limited practical use). Women may have found it more difficult than men to raise the necessary capital because English law did not consider a married woman to have any legal existence; she could not sue or be sued. A married woman was a feme covert and technically could not make any legally binding contracts, a fact which may have discouraged others from loaning money to or making other contracts with married women. However, this law was not as limiting in practice as it would seem to be in theory because a married woman engaged in trade on her own account was treated by the courts as a feme sole and was responsible for her own debts.28

The professionalization of certain occupations resulted in the exclusion of women from work they had previously done. Women had provided medical care for centuries, but the professionalization of medicine in the early-nineteenth century made it a male occupation. The Royal College of Physicians admitted only graduates of Oxford and Cambridge, schools to which women were not admitted until the twentieth century. Women were even replaced by men in midwifery. The process began in the late-eighteenth century, when we observe the use of the term “man-midwife,” an oxymoronic title suggestive of changing gender roles. In the nineteenth century the term “man-midwife” disappeared, and physicians or surgeons replaced women in attending childbirth. Professionalization of the clergy was also effective in excluding women. While the Church of England did not allow women ministers, the Methodist movement had many women preachers during its early years. However, even among the Methodists female preachers disappeared when lay preachers were replaced with a professional clergy in the early nineteenth century.29

In other occupations where professionalization was not as strong, women remained an important part of the workforce. Teaching, particularly in the lower grades, was a common profession for women. Some were governesses, who lived as household servants, but many opened their own schools and took in pupils. The writing profession seems to have been fairly open to women; the leading novelists of the period include Jane Austen, Charlotte and Emily Brontë, Fanny Burney, George Eliot (the pen name of Mary Ann Evans), Elizabeth Gaskell, and Frances Trollope. Female non-fiction writers of the period include Jane Marcet, Hannah More, and Mary Wollstonecraft.

Other Occupations

The occupations listed above are by no means a complete listing of the occupations of women during the Industrial Revolution. Women made buttons, nails, screws, and pins. They worked in the tin plate, silver plate, pottery and Birmingham “toy” trades (which made small articles like snuff boxes). Women worked in the mines until the Mines Act of 1842 prohibited them from working underground, but afterwards women continued to work above ground at the mines.

Married Women in the Labor Market

While there are no comprehensive sources of information on the labor force participation of married women, household budgets reported by contemporary authors give us some information on women’s participation.30 For the period 1787 to 1815, 66 percent of married women in working-class households had either a recorded occupation or positive earnings. For the period 1816-20 the rate fell to 49 percent, but in 1821-40 it recovered to 62 percent. Table Eight gives participation rates of women by date and occupation of the husband.

Table Eight

Participation Rates of Married Women

Years High-Wage Agriculture Low-Wage Agriculture Mining Factory Outwork Trades All
1787-1815 55 85 40 37 46 63 66
1816-1820 34 NA 28 4 42 30 49
1821-1840 22 85 33 86 54 63 62

Source: Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review 48 (February 1995): 89-117.

While many wives worked, the amount of their earnings was small relative to their husbands’ earnings. Annual earnings of married women who did work averaged only about 28 percent of their husbands’ earnings. Because not all women worked, and because children usually contributed more to the family budget than their mothers, for the average family the wife contributed only around seven percent of total family income.

Childcare

Women workers used a variety of methods to care for their children. Sometimes childcare and work were compatible, and women took their children with them to the fields or shops where they worked.31 Sometimes women working at home would give their infants opiates such as “Godfrey’s Cordial” in order to keep the children quiet while their mothers worked.32 The movement of work into factories increased the difficulty of combining work and childcare. In most factory work the hours were rigidly set, and women who took the jobs had to accept the twelve or thirteen hour days. Work in the factories was very disciplined, so the women could not bring their children to the factory, and could not take breaks at will. However, these difficulties did not prevent women with small children from working.

Nineteenth-century mothers used older siblings, other relatives, neighbors, and dame schools to provide child care while they worked.33 Occasionally mothers would leave young children home alone, but this was dangerous enough that only a few did so.34 Children as young as two might be sent to dame schools, in which women would take children into their home and provide child care, as well as some basic literacy instruction.35 In areas where lace-making or straw-plaiting thrived, children were sent from about age seven to “schools” where they learned the trade.36

Mothers might use a combination of different types of childcare. Elizabeth Wells, who worked in a Leicester worsted factory, had five children, ages 10, 8, 6, 2, and four months. The eldest, a daughter, stayed home to tend the house and care for the infant. The second child worked, and the six-year-old and two-year-old were sent to “an infant school.”37 Mary Wright, an “over-looker” in the rag-cutting room of a Buckinghamshire paper factory, had five children. The eldest worked in the rag-cutting room with her, the youngest was cared for at home, and the middle three were sent to a school; “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for the three others. They go to a school, where they are taken care of and taught to read.”38

The cost of childcare was substantial. At the end of the eighteenth century the price of child-care was about 1s. a week, which was about a quarter of a woman’s weekly earnings in agriculture.39 In the 1840s mothers paid anywhere from 9d. to 2s.6d. per week for child care, out of a wage of around 7s. per week.40
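
These fractions follow directly from the figures just quoted; the short sketch below simply restates the arithmetic (the 4s. weekly agricultural earnings are implied by the statement that 1s. was about a quarter of a woman’s weekly earnings, and are an inference, not a figure from the sources).

```python
# Illustrative arithmetic using only the figures quoted in this paragraph.
# 1 shilling (s.) = 12 pence (d.).

def pence(s=0, d=0):
    return 12 * s + d

# Late eighteenth century: about 1s. a week for childcare, against roughly 4s.
# a week earned in agriculture (implied by "about a quarter" above).
print(pence(s=1) / pence(s=4))      # 0.25

# 1840s: 9d. to 2s.6d. a week for childcare, against a wage of about 7s. a week.
low, high, wage = pence(d=9), pence(s=2, d=6), pence(s=7)
print(low / wage, high / wage)      # roughly 0.11 to 0.36 of the weekly wage
```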

For Further Reading

Burnette, Joyce. “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain.” Economic History Review 50 (1997): 257-281.

Davidoff, Leonore, and Catherine Hall. Family Fortunes: Men and Women of the English Middle Class, 1780-1850. Chicago: University of Chicago Press, 1987.

Honeyman, Katrina. Women, Gender and Industrialisation in England, 1700-1870. New York: St. Martin’s Press, 2000.

Horrell, Sara, and Jane Humphries. “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865.” Economic History Review 48 (1995): 89-117.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Peter. “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850.” Economic History Review 44 (1991): 461-476

Kussmaul, Ann. Servants in Husbandry in Early Modern England. Cambridge: Cambridge University Press, 1981.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850, London: Routledge, 1930.

Sanderson, Elizabeth. Women and Work in Eighteenth-Century Edinburgh. New York: St. Martin’s Press, 1996.

Snell, K.D.M. Annals of the Labouring Poor: Social Change and Agrarian England, 1660-1900. Cambridge: Cambridge University Press, 1985.

Valenze, Deborah. Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England. Princeton University Press, 1985.

Valenze, Deborah. The First Industrial Woman. Oxford: Oxford University Press, 1995.

1 “Since large-scale industry has transferred the woman from the house to the labour market and the factory, and makes her, often enough, the bread-winner of the family, the last remnants of male domination in the proletarian home have lost all foundation – except, perhaps, for some of that brutality towards women which became firmly rooted with the establishment of monogamy. . . . It will then become evident that the first premise for the emancipation of women is the reintroduction of the entire female sex into public industry.” Frederick Engels, The Origin of the Family, Private Property and the State, in Karl Marx and Frederick Engels: Selected Works, New York: International Publishers, 1986, p. 508, 510.

2 Ivy Pinchbeck (Women Workers and the Industrial Revolution, Routledge, 1930) claimed that higher incomes allowed some women to withdraw from the labor force. While she saw some disadvantages resulting from this withdrawal, particularly the loss of independence, she thought that overall women benefited from having more time to devote to their homes and families. Davidoff and Hall (Family Fortunes: Men and Women of the English Middle Class, 1780-1850, Univ. of Chicago Press, 1987) agree that women withdrew from work, but they see the change as a negative result of gender discrimination. Similarly, Horrell and Humphries (“Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117) do not find that rising incomes caused declining labor force participation, and they believe that declining demand for female workers caused the female exodus from the workplace.

3 While the British census began in 1801, individual enumeration did not begin until 1841. For a detailed description of the British censuses of the nineteenth century, see Edward Higgs, Making Sense of the Census, London: HMSO, 1989.

4 For example, Helen Speechley, in her dissertation, showed that seven women who worked for wages at a Somerset farm had no recorded occupation in the 1851 census. See Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999.

5 Edward Higgs finds that removing family members from the “servants” category reduced the number of servants in Rochdale in 1851. Enumerators did not clearly distinguish between the terms “housekeeper” and “housewife.” See Edward Higgs, “Domestic Service and Household Production” in Angela John, ed., Unequal Opportunities, Oxford: Basil Blackwell, and “Women, Occupations and Work in the Nineteenth Century Censuses,” History Workshop, 1987, 23:59-80. In contrast, the censuses of the early 20th century seem to be fairly accurate; see Tim Hatton and Roy Bailey, “Women’s Work in Census and Survey, 1911-1931,” Economic History Review, Feb. 2001, LIV:87-107.

6 A shilling was equal to 12 pence, so if women earned 2s.6d. for 20 hours, they earned 1.5d. per hour. Women agricultural laborers earned closer to 1d. per hour, so the London wage was higher. See Dorothy George, London Life in the Eighteenth-Century, London: Kegan Paul, Trench, Trubner & Co., 1925, p. 208, and Patricia Malcolmson, English Laundresses, Univ. of Illinois Press, 1986, p. 25.

7 On the technology of the Industrial Revolution, see David Landes, The Unbound Prometheus, Cambridge Univ. Press, 1969, and Joel Mokyr, The Lever of Riches, Oxford Univ. Press, 1990.

8 A petition from Glasgow cotton manufacturers makes the following claim, “In almost every department of the cotton spinning business, the labour of women would be equally efficient with that of men; yet in several of these departments, such measures of violence have been adopted by the combination, that the women who are willing to be employed, and who are anxious by being employed to earn the bread of their families, have been driven from their situations by violence. . . . Messrs. James Dunlop and Sons, some years ago, erected cotton mills in Calton of Glasgow, on which they expended upwards of [£]27,000 forming their spinning machines, (Chiefly with the view of ridding themselves of the combination [the male union],) of such reduced size as could easily be wrought by women. They employed women alone, as not being parties to the combination, and thus more easily managed, and less insubordinate than male spinners. These they paid at the same rate of wages, as were paid at other works to men. But they were waylaid and attacked, in going to, and returning from their work; the houses in which they resided, were broken open in the night. The women themselves were cruelly beaten and abused; and the mother of one of them killed; . . . And these nefarious attempts were persevered in so systematically, and so long, that Messrs. Dunlop and sons, found it necessary to dismiss all female spinners from their works, and to employ only male spinners, most probably the very men who had attempted their ruin.” First Report from the Select Committee on Artizans and Machinery, British Parliamentary Papers, 1824 vol. V, p. 525.

9 Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1

10 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, Ch. 1, and K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Ch. 2.

11 For the period 1574 to 1821 about 45 percent of servants were female, but this fell to 32 percent in 1851. See Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

12 Men usually worked 12-hour days, and women averaged closer to 10 hours. See Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review, May 1997, 50:257-281.

13 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 65.

14 See Robert Allen, Enclosure and the Yeoman, Clarendon Press, 1992, and Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History, March 1999, 59:41-67.

15 While the scythe had been used for mowing grass for hay or cheaper grains for some time, the sickle was used for harvesting wheat until the nineteenth century. Thus adoption of the scythe for harvesting wheat seems to be a response to changing prices rather than invention of a new technology. The scythe required less labor to harvest a given acre, but left more grain on the ground, so as grain prices fell relative to wages, farmers substituted the scythe for the sickle. See E.J.T. Collins, “Harvest Technology and Labour Supply in Britain, 1790-1870,” Economic History Review, Dec. 1969, XXIII:453-473.

16 K.D.M. Snell, Annals of the Labouring Poor, Cambridge, 1985.

17 See Jane Humphries, “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries,” Journal of Economic History, March 1990, 50:17-42, and J.M. Neeson, Commoners: Common Rights, Enclosure and Social Change in England, 1700-1820, Cambridge Univ. Press, 1993.

18 See Peter King, “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850,” Economic History Review, 1991, XLIV:461-476.

19 Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 41-42. See also Deborah Valenze, The First Industrial Woman, Oxford Univ. Press, 1995.

20 Stephen Glover, The Directory of the County of Derby, Derby: Henry Mozley and Son, 1829.

21 Eden gives an example of gentlewomen who, on the death of their father, began to work as farmers. He notes, “not seldom, in one and the same day, they have divided their hours in helping to fill the dung-cart, and receiving company of the highest rank and distinction.” (F.M. Eden, The State of the Poor, vol. i., p. 626.) One woman farmer who was clearly an active manager celebrated her success in a letter sent to the Annals of Agriculture, (quoted by Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 30): “I bought a small estate, and took possession of it in the month of July, 1803. . . . As a woman undertaking to farm is generally a subject of ridicule, I bought the small estate by way of experiment: the gentlemen of the county have now complimented me so much on having set so good an example to the farmers, that I have determined on taking a very large farm into my hands.” The Annals of Agriculture give a number of examples of women farmers cited for their experiments or their prize-winning crops.

22 Tradesmen considered themselves lucky to find a wife who was good at business. In his autobiography James Hopkinson, a cabinetmaker, said of his wife, “I found I had got a good and suitable companion one with whom I could take sweet council and whose love and affections was only equall’d by her ability as a business woman.” Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, 1968, p. 96.

23 See Elizabeth Sanderson, Women and Work in Eighteenth-Century Edinburgh, St. Martin’s Press, 1996.

24 See K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Table 6.1.

25 The law requiring a seven-year apprenticeship before someone could work in a trade was repealed in 1814.

26 See Francois Crouzet, The First Industrialists, Cambridge Univ. Press, 1985, and M.L. Baumber, From Revival to Regency: A History of Keighley and Haworth, 1740-1820, Crabtree Ltd., Keighley, 1983.

27 First Report of the Central Board of His Majesty’s Commissioners for inquiry into the Employment of Children in Factories, with Minutes of Evidence, British Parliamentary Papers, 1833 (450) XX, A1, p. 120.

28 For example, in the case of “LaVie and another Assignees against Philips and another Assignees,” the court upheld the right of a woman to operate as feme sole. In 1764 James Cox and his wife Jane were operating separate businesses, and both went bankrupt within the space of two months. Jane’s creditors sued James’s creditors for the recovery of five fans, goods from her shop that had been taken for James’s debts. The court ruled that, since Jane was trading as a feme sole, her husband did not own the goods in her shop, and thus James’s creditors had no right to seize them. See William Blackstone, Reports of Cases determined in the several Courts of Westminster-Hall, from 1746 to 1779, London, 1781, p. 570-575.

29 See Deborah Valenze, Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England, Princeton Univ. Press, 1985.

30 See Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117.

31 In his autobiography James Hopkinson says of his wife, “How she laboured at the press and assisted me in the work of my printing office, with a child in her arms, I have no space to tell, nor in fact have I space to allude to the many ways she contributed to my good fortune.” James Hopkinson, Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, J.B. Goodman, ed., Routledge & Kegan Paul, 1968, p. 96. A 1739 poem by Mary Collier suggests that carrying babies into the field was fairly common; it contains these lines:

Our tender Babes into the Field we bear,
And wrap them in our Cloaths to keep them warm,
While round about we gather up the Corn;
. . .
When Night comes on, unto our Home we go,
Our Corn we carry, and our Infant too.

Mary Collier, The Woman’s Labour, Augustan Reprint Society, #230, 1985, p. 10. An 1835 Poor Law report stated that in Sussex, “the custom of the mother of a family carrying her infant with her in its cradle into the field, rather than lose the opportunity of adding her earnings to the general stock, though partially practiced before, is becoming very much more general now.” (Quoted in Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 85.)

32 Sarah Johnson of Nottingham claimed that she ” Knows it is quite a common custom for mothers to give Godfrey’s and the Anodyne cordial to their infants, ‘it is quite too common.’ It is given to infants at the breast; it is not given because the child is ill, but ‘to compose it to rest, to sleep it,’ so that the mother may get to work. ‘Has seen an infant lay asleep on its mother’s lap whilst at the lace-frame for six or eight hours at a time.’ This has been from the effects of the cordial.” [Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 157] Mary Colton, a lace worker from Nottingham, described her use of the drug to parliamentary investigators thus: ‘Was confined of an illegitimate child in November, 1839. When the child was a week old she gave it a half teaspoonful of Godfrey’s twice a-day. She could not afford to pay for the nursing of the child, and so gave it Godfrey’s to keep it quiet, that she might not be interrupted at the lace piece; she gradually increased the quantity by a drop or two at a time until it reached a teaspoonful; when the infant was four months old it was so “wankle” and thin that folks persuaded her to give it laudanum to bring it on, as it did other children. A halfpenny worth, which was about a teaspoonful and three-quarters, was given in two days; continued to give her this quantity since February, 1840, until this last past (1841), and then reduced the quantity. She now buys a halfpenny worth of laudanum and a halfpenny worth of Godfrey’s mixed, which lasts her three days. . . . If it had not been for her having to sit so close to work she would never have given the child Godfrey’s. She has tried to break it off many times but cannot, for if she did, she should not have anything to eat.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 630].

33 Elizabeth Leadbeater, who worked for a Birmingham brass-founder, worked while she was nursing and had her mother look after the infant. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 710.] Mrs. Smart, an agricultural worker from Calne, Wiltshire, noted, “Sometimes I have had my mother, and sometimes my sister, to take care of the children, or I could not have gone out.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 65.] More commonly, though, older siblings provided the childcare. “Older siblings” generally meant children of nine or ten years old, and included boys as well as girls. Mrs. Britton of Calne, Wiltshire, left her children in the care of her eldest boy. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 66] In a family from Presteign, Wales, containing children aged 9, 7, 5, 3, and 1, we find that “The oldest children nurse the youngest.” [F.M. Eden, State of the Poor, London: Davis, 1797, vol. iii, p. 904] When asked what income a labourer’s wife and children could earn, some respondents to the 1833 “Rural Queries” assumed that the eldest child would take care of the others, leaving the mother free to work. The returns from Bengeworth, Worcester, report that, “If the Mother goes to field work, the eldest Child had need to stay at home, to tend the younger branches of the Family.” Ewhurst, Surrey, reported that “If the Mother were employed, the elder Children at home would probably be required to attend to the younger Children.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 488 and 593]

34 Parents heard of incidents, such as one reported in the Times (Feb. 6, 1819):

A shocking accident occurred at Llandidno, near Conway, on Tuesday night, during the absence of a miner and his wife, who had gone to attend a methodist meeting, and locked the house door, leaving two children within; the house by some means took fire, and was, together with the unfortunate children, consumed to ashes; the eldest only four years old!

Mothers were aware of these dangers. One mother who admitted to leaving her children at home worried greatly about the risks:

I have always left my children to themselves, and, God be praised! nothing has ever happened to them, though I thought it dangerous. I have many a time come home, and have thought it a mercy to find nothing has happened to them. . . . Bad accidents often happen. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 68.]

Leaving young children home without child care had real dangers, and the fact that most working mothers paid for childcare suggests that they did not consider leaving young children alone to be an acceptable option.

35 In 1840 an observer of Spitalfields noted, “In this neighborhood, where the women as well as the men are employed in the manufacture of silk, many children are sent to small schools, not for instruction, but to be taken care of whilst their mothers are at work.”[ Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 261] In 1840 the wife of a Gloucester weaver earned 2s. a week from running a school; she had twelve students and charged each 2d. a week. [Reports from Assistant Handloom Weavers’ Commissioners, British Parliamentary Papers, 1840 (220) XXIV, p. 419] In 1843 the lace-making schools of the midlands generally charged 3d. per week. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46, 64, 71, 72]

36 At one straw-plaiting school in Hertfordshire,

Children commence learning the trade about seven years old: parents pay 3d. a-week for each child, and for this they are taught the trade and taught to read. The mistress employs about from 15 to 20 at work in a room; the parents get the profits of the children’s labour.[ Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 64]

At these schools there was very little instruction; some time was devoted to teaching the children to read, but they spent most of their time working. One mistress complained that the children worked too much and learned too little, “In my judgment I think the mothers task the children too much; the mistress is obliged to make them perform it, otherwise they would put them to other schools.” Ann Page of Newport Pagnell, Buckinghamshire, had “eleven scholars” and claimed to “teach them all reading once a-day.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 66, 71] The standard rate of 3d. per week seems to have been paid for supervision of the children rather than for the instruction.

37 First Report of the Central Board of His Majesty’s Commissioners for Inquiring into the Employment of Children in Factories, British Parliamentary Papers, 1833 (450) XX, C1 p. 33.

38 Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46.

39 David Davies, The Case of Labourers in Husbandry Stated and Considered, London: Robinson, 1795, p.14. Agricultural wages for this time period are found in Eden, State of the Poor, London: Davis, 1797.

40 In 1843 parliamentary investigator Alfred Austin reports, “Where a girl is hired to take care of children, she is paid about 9d. a week, and has her food besides, which is a serious deduction from the wages of the woman at work.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 26] Agricultural wages in the area were 8d. per day, so even without the cost of food, the cost of child care was about one-fifth a woman’s wage. One Scottish woman earned 7s. per week in a coal mine and paid 2s.6d., or 36 percent of her income, for the care of her children. [B.P.P. 1844 (592) XVI, p. 6] In 1843 Mary Wright, an “over-looker” at a Buckinghamshire paper factory, paid even more for child care; she told parliamentary investigators that “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for three others.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46] She earned 10s.6d. per week, so her total child-care payments were 21 percent of her wage. Engels put the cost of child care at 1s. or 18d. a week. [Engels, [1845] 1926, p. 143] Factory workers often made 7s. a week, so again these women may have paid around one-fifth of their earnings for child care. Some estimates suggest even higher fractions of women’s income went to child care. The overseer of Wisbech, Cambridge, reports, “The earnings of the Wife we consider comparatively small, in cases where she has a large family to attend to; if she has one or two children, she has to pay half, or perhaps more of her earnings for a person to take care of them.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 76]

Citation: Burnette, Joyce. “Women Workers in the British Industrial Revolution”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/women-workers-in-the-british-industrial-revolution/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, high-performing, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, which is the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. The size of Indonesia’s population is about 230 million (2002), of whom the largest share (roughly 60 percent) live in Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year Indonesia Philippines Thailand Japan
1900 745 1,033 812 1,180
1913 904 1,066 835 1,385
1950 840 1,070 817 1,926
1973 1,504 1,959 1,874 11,439
1990 2,516 2,199 4,645 18,789
2000 3,041 2,385 6,335 20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that prevented further expansion. These setbacks often originated in the internal institutional or political spheres (either after independence or in colonial times), although external influences such as the 1930s Depression also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself, but it can also be applied to the heterogeneity in the national features of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is that between densely populated Java, which has a long tradition of politically and economically dominating the sparsely populated Outer Islands, and those Outer Islands themselves. Yet within Java and within the various Outer Islands one also encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries the flourishing and enterprising interregional trade has benefited regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) had ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also incorporated weaknesses in the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945 when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia covers a range of topics, from the characteristics of the dynamic export of raw materials and the dualist economy in which both Western and Indonesian entrepreneurs participated, to the strong degree of regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesians as well as Australian and American scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook aims at a juxtaposition of three themes: globalization, economic integration and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch. The period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name which refers to all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth and seventeenth century

Present-day research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups such as the Arabs, the Chinese and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a weak grip on a limited number of places in the Indonesian archipelago. As a consequence much of the economic history of these islands escapes the attention of the economic historian. Most of the data on economic matters were handed down by western observers with their limited perspective. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (whose yields were not necessarily meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and the Dutch presence was concentrated in only a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its demise and in 1799 the VOC came to an end (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by Javanese prince Diponegoro. To repress this revolt and establish firm rule in Java, colonial expenses increased, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

1840-1844 1845-1849
Coffee 40 278 24 549
Sugar 8 218 4 136
Indigo 7 836 7 726
Pepper, Tea 647 1 725
Total net profits 39 341 35 057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

1831/40 1841/50 1851/60 1861/70
Gross revenues of sale of colonial products 227.0 473.9 652.7 641.8
Costs of transport etc (NHM) 88.0 165.4 138.7 114.7
Sum of expenses 59.2 175.1 275.3 276.6
Total net profits* 150.6 215.6 289.4 276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted but the exports of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share in these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both in large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration when the road system, railroad system (in Java and Sumatra) and port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. In some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient, as for example in the production of rice, and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery took relatively long. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Economic Key Aggregates 1830-1990

Period GDP per capita Export volume Export prices Government expenditure
Cultivation System 1830-1840 n.a. 13.5 5.0 8.5
Cultivation System 1840-1848 n.a. 1.5 -4.5 [very low]
Cultivation System 1849-1873 n.a. 1.5 1.5 2.6
Liberal Period 1874-1900 [very low] 3.1 -1.9 2.3
Ethical Period 1901-1928 1.7 5.8 17.4 4.1
Great Depression 1929-1934 -3.4 -3.9 -19.7 0.4
Prewar Recovery 1934-1940 2.5 2.2 7.8 3.4
Old Order 1950-1965 1.0 0.8 -2.1 1.8
New Order 1966-1990 4.4 5.4 11.6 10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
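As the note indicates, Booth’s averages come from fitting an exponential curve to the annual series, which amounts to a log-linear least-squares fit. The sketch below illustrates the idea with invented index numbers; the years, values, and resulting rate are placeholders, not Booth’s data.

```python
# A minimal sketch of the "exponential curve" (log-linear) method for an
# average annual growth rate, using hypothetical index numbers.
import numpy as np

years = np.array([1901, 1905, 1910, 1915, 1920, 1925, 1928])
series = np.array([100.0, 108.0, 119.0, 131.0, 148.0, 160.0, 172.0])  # hypothetical export-volume index

# Fit ln(y) = a + b*t by ordinary least squares; b is the continuous growth rate.
b, a = np.polyfit(years, np.log(series), 1)
annual_growth = np.exp(b) - 1.0

print(f"Average annual growth: {annual_growth:.1%}")
```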

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. During the period 1949-1965 there was little economic growth, and what growth there was occurred predominantly in the years from 1950 to 1957. In 1958-1965, growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and the absence of foreign capital were detrimental to economic development, after the government had eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution and estranged the suppliers of western capital even more when he developed communist sympathies.

After 1966, the second president, general Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output quickly increased, including steel, aluminum, and cement but also products such as food, textiles and cigarettes. From the 1970s onward, rising oil prices on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the cost of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis in 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices came to characterize the 32-year-old, strongly centralized, autocratic Soeharto regime.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, form some of its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but had not yet fully taken place by the summer of 2003, when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes that have been the subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, there has traditionally been a strong and dynamic class of entrepreneurs (traders and peasants) in many regions of Indonesia. Resilient in times of economic malaise, cunning in symbiosis with traders of other Asian nationalities (particularly Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy that is not so judgmental about the characteristics of economic development in the Asian sector. Some focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of and the motives for Dutch colonial expansion. Dutch imperialism can be viewed as having a rather complex mix of political, economic and military motives which influenced decisions about colonial borders, establishing political control in order to exploit oil and other natural resources, and preventing local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion was from 1825-1870. During this phase interference with economic matters outside Java increased slowly but military intervention was occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896. During this phase initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by extension of colonial (military) control in the regions concerned. The third and final phase was characterized by full-scale aggressive imperialism (often known as ‘pacification’) and lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which had long been dominant in economic and political terms, and the Outer Islands, a large and sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas that were propelled forward by export trade, either of Indonesian or European origin (examples are Palembang, East Sumatra, Southeast Kalimantan), and areas that lagged behind and only slowly reaped the fruits of the modernization that took place elsewhere (for example Benkulu, Timor, Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, the official Dutch policy was to abstain from interference with local affairs: the scarce resources of the Dutch colonial administrators were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. This resulted in the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), which was installed in 1918 but had only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was solved by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included a penal clause (which allowed for punishment of laborers by plantation owners). In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing abuse of coolies on the estates. The living circumstances and treatment of the coolies have been a subject of debate, particularly regarding the question of whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? The detrimental effects of this drain of capital, in return for which the colony received European entrepreneurial initiative, have been debated, as have the exact methods of its measurement. There was also a second drain to the home countries of other immigrant ethnic groups, mainly to China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period, 1945-1965, was characterized by economic (and political) chaos, although some economic growth undeniably did take place during these years. However, macroeconomic instability, lack of foreign investment and structural rigidity formed economic problems that were closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism. His efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry, but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid programs were attracted, unbridled population growth was reduced due to family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing economy. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil-prices), and rapid export-led growth. During this last phase, commentators (including academic economists) were increasingly concerned about the thriving corruption at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt and the weak financial system. Its severity, however, must also be attributed to political factors: the monetary crisis (KRISMON) led to a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years and his government had become heavily centralized and corrupt and was not able to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2003: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden, Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900‑1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca NY: Cornell University Press 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries. A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

Indentured Servitude in the Colonial U.S.

Joshua Rosenbloom, University of Kansas

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Those who could not pay the fare financed the voyage by signing contracts, or “indentures,” committing themselves to work for a fixed number of years in the future—their labor being their only viable asset—with British merchants, who then sold these contracts to colonists after their ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they did so because of other benefits, such as the opportunity to choose their own master, and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Total Emigration
Destination Number Percentage Percent listed as servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 80.90

Source: Grubb (1985b, p. 334).

References

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Citation: Rosenbloom, Joshua. “Indentured Servitude in the Colonial U.S.”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/indentured-servitude-in-the-colonial-u-s/

Immigration to the United States

Raymond L. Cohn, Illinois State University (Emeritus)

For good reason, it is often said the United States is a nation of immigrants. Almost every person in the United States is descended from someone who arrived from another country. This article discusses immigration to the United States from colonial times to the present. The focus is on individuals who paid their own way, rather than slaves and indentured servants. Various issues concerning immigration are discussed: (1) the basic data sources available, (2) the variation in the volume over time, (3) the reasons immigration occurred, (4) nativism and U.S. immigration policy, (5) the characteristics of the immigrant stream, (6) the effects on the United States economy, and (7) the experience of immigrants in the U.S. labor market.

For readers who wish to further investigate immigration, the following works listed in the Reference section of this entry are recommended as general histories of immigration to the United States: Hansen (1940); Jones (1960); Walker (1964); Taylor (1971); Miller (1985); Nugent (1992); Erickson (1994); Hatton and Williamson (1998); and Cohn (2009).

The Available Data Sources

The primary source of data on immigration to the United States is the Passenger Lists, though U.S. and state census materials, Congressional reports, and company records also contain material on immigrants. In addition, the Integrated Public Use Microdata Series (IPUMS) web site at the University of Minnesota (http://www.ipums.umn.edu/usa/) contains data samples drawn from a number of federal censuses. Since the samples are of individuals and families, the site is useful in immigration research. A number of the countries from which the immigrants left also kept records about the individuals. Many of these records were originally summarized in Ferenczi (1970). Although records from other countries are useful for some purposes, the U.S. records are generally viewed as more complete, especially for the period before 1870. It is worthy of note that comparisons of the lists between countries often lead to somewhat different results. It is also probable that, during the early years, a few of the U.S. lists were lost or never collected.

Passenger Lists

The U.S. Passenger Lists resulted from an 1819 law requiring every ship carrying passengers that arrived in the United States from a foreign port to file with the port authorities a list of all passengers on the ship. These records are the basis for the vast majority of the historical data on immigration. For example, virtually all of the tables in the chapter on immigration in Carter et al. (2006) are based on these records. The Passenger Lists recorded a great deal of information. Each list indicates the name of the ship, the name of the captain, the port(s) of embarkation, the port of arrival, and the date of arrival. Following this information is a list of the passengers. Each person’s name is listed, along with age, gender, occupation, country of origin, country of destination, and whether or not the person died on the voyage. It is often possible to distinguish family groups since family members were usually grouped together and, to save time, the compilers frequently used ditto marks to indicate the same last name. Various data based on the lists were published in Senate or Congressional Reports at the time. Due to their usefulness in genealogical research, the lists are now widely available on microfilm and are increasingly available on CD-ROM. Even a few public libraries in major cities have full or partial collections of these records. Most of the ship lists are also available on-line at various web sites.

The Volume of Immigration

Both the total volume of immigration to the United States and the immigrants’ countries of origin varied substantially over time. Table 1 provides the basic data on total immigrant volume by time period broken down by country or area of origin. The column “Average Yearly Total – All Countries” presents the average yearly total immigration to the United States in the time period given. Immigration rates – the average number of immigrants entering per thousand individuals in the U.S. population – are shown in the next column. The columns headed by country or area names show the percentage of immigrants coming from that place. The time periods in Table 1 have been chosen for illustrative purposes. A few things should be noted concerning the figures in Table 1. First, the estimates for much of the period since 1820 are based on the original Passenger Lists and are subject to the caveats discussed above. The estimates for the period before 1820 are the best currently available but are less precise than those after 1820. Second, though it was legal to import slaves into the United States (or the American colonies) before 1808, the estimates presented exclude slaves. Third, though illegal immigration into the United States has occurred, the figures in Table 1 include only legal immigrants. In 2015, the total number of illegal immigrants in the United States is estimated at around 11 million. These individuals were mostly from Mexico, Central America, and Asia.
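To make the rate column concrete, the short sketch below computes immigrants per thousand U.S. residents from two illustrative round numbers; they roughly approximate, but are not taken from, the underlying data for 1900-1914.

```python
# The immigration rate used in Table 1: average yearly immigrants
# per 1,000 of the U.S. population.
def immigration_rate(avg_yearly_immigrants, us_population):
    """Immigrants per 1,000 residents."""
    return 1000 * avg_yearly_immigrants / us_population

# Roughly 900,000 arrivals a year against a population near 90 million
# gives a rate of about 10 per 1,000 -- close to the 10.2 shown for 1900-1914.
print(round(immigration_rate(900_000, 90_000_000), 1))  # 10.0
```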

Trends over Time

From the data presented in Table 1, it is apparent that the volume of immigration and its rate relative to the U.S. population varied over time. Immigration was relatively small until a noticeable increase occurred in the 1830s and a huge jump in the 1840s. The volume passed 200,000 for the first time in 1847 and the period between 1847 and 1854 saw the highest rate of immigration in U.S. history. From the level reached between 1847 and 1854, volume decreased and increased over time through 1930. For the period from 1847 through 1930, the average yearly volume was 434,000. During these years, immigrant volume peaked between 1900 and 1914, when an average of almost 900,000 immigrants arrived in the United States each year. This period is also second in terms of the rate of immigration relative to the U.S. population. The volume and rate fell to low levels between 1931 and 1946, though by the 1970s the volume had again reached that experienced between 1847 and 1930. The rise in volume continued through the 1980s and 1990s, though the rate per one thousand American residents has remained well below that experienced before 1915. It is notable that since about 1990, the average yearly volume of immigration has surpassed the previous peak experienced between 1900 and 1914. In 2015, reflecting the large volume of immigration, about 15 percent of the U.S. population was foreign-born.

Table 1
Immigration Volume and Rates

Years Average Yearly Total – All Countries Immigration Rates (Per 1000 Population) Percent of Average Yearly Total
Great Britain Ireland Scandinavia and Other NW Europe Germany Central and Eastern Europe Southern Europe Asia Africa Australia and Pacific Islands Mexico Other America
1630‑1700 2,200 —- —- —- —- —- —- —- —- —- —- —- —-
1700-1780 4,325 —- —- —- —- —- —- —- —- —- —- —- —-
1780-1819 9,900 —- —- —- —- —- —- —- —- —- —- —- —-
1820-1831 14,538 1.3 22 45 12 8 0 2 0 0 —- 4 6
1832-1846 71,916 4.3 16 41 9 27 0 1 0 0 —- 1 5
1847-1854 334,506 14.0 13 45 6 32 0 0 1 0 —- 0 3
1855-1864 160,427 5.2 25 28 5 33 0 1 3 0 —- 0 4
1865-1873 327,464 8.4 24 16 10 34 1 1 3 0 0 0 10
1874-1880 260,754 5.6 18 15 14 24 5 3 5 0 0 0 15
1881-1893 525,102 8.9 14 12 16 26 16 8 1 0 0 0 6
1894-1899 276,547 3.9 7 12 12 11 32 22 3 0 0 0 2
1900-1914 891,806 10.2 6 4 7 4 45 26 3 0 0 1 5
1915-1919 234,536 2.3 5 2 8 1 7 21 6 0 1 8 40
1920-1930 412,474 3.6 8 5 8 9 14 16 3 0 0 11 26
1931-1946 50,507 0.4 10 2 9 15 8 12 3 1 1 6 33
1947-1960 252,210 1.5 7 2 6 8 4 10 8 1 1 15 38
1961-1970 332,168 1.7 6 1 4 6 4 13 13 1 1 14 38
1971-1980 449,331 2.1 3 0 1 2 4 8 35 2 1 14 30
1981-1990 733,806 3.1 2 0 1 1 3 2 37 2 1 23 27
1991-2000 909,264 3.4 2 1 1 1 11 2 38 5 1 30 9
2001-2008 1,040,951 4.4 2 0 1 1 9 1 35 7 1 17 27
2009-2015 1,046,459 4.8 1 0 1 1 5 1 40 10 1 14 27

Sources: Years before 1820: Grabbe (1989). 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years); 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Sources of Immigration

The sources of immigration have changed a number of times over the years. In general, four relatively distinct periods can be identified in Table 1. Before 1881, the vast majority of immigrants, almost 86% of the total, arrived from northwest Europe, principally Great Britain, Ireland, Germany, and Scandinavia. During the colonial period, though the data do not allow an accurate breakdown, most immigrants arrived from Britain, with smaller numbers coming from Ireland and Germany. The years between 1881 and 1893 saw a transition in the sources of U.S. immigrants. After 1881, immigrant volume from central, eastern, and southern Europe began to increase rapidly. Between 1894 and 1914, immigrants from southern, central, and eastern Europe accounted for 69% of the total. With the onset of World War I in 1914, the sources of U.S. immigration again changed. From 1915 to the present day, a major source of immigrants to the United States has been the Western Hemisphere, accounting for 46% of the total. In the period between 1915 and 1960, virtually all of the remaining immigrants came from Europe, though no specific part of Europe was dominant. Beginning in the 1960s, immigration from Europe fell off substantially and was replaced by a much larger percentage of immigrants from Asia. Also noteworthy is the rise in immigration from Africa in the twenty-first century. Thus, over the course of U.S. history, the sources of immigration changed from northwestern Europe to southern, central and eastern Europe to the Americas in combination with Europe to the current situation where most immigrants come from the Americas, Asia and Africa.

Duration of Voyage and Method of Travel

Before the 1840s, immigrants arrived on sailing ships. General information on the length of the voyage is unavailable for the colonial and early national periods. By the 1840s, however, the average voyage length for ships from the British Isles was five to six weeks, with those from the European continent taking a week or so longer. In the 1840s, a few steamships began to cross the Atlantic. Over the course of the 1850s, steamships began to account for a larger, though still minority, percentage of immigrant travel. By 1873, virtually all immigrants arrived on steamships (Cohn 2005). As a result, the voyage time fell initially to about two weeks and it continued to decline into the twentieth century. Steamships remained the primary means of travel until after World War II. As a consequence of the boom in airplane travel over the last few decades, most immigrants now arrive via air.

Place of Arrival

Where immigrants landed in the United States varied, especially in the period before the Civil War. During the colonial and early national periods, immigrants arrived not only at New York City but also at a variety of other ports, especially Philadelphia, Boston, New Orleans, and Baltimore. Over time, and especially when most immigrants began arriving via steamship, New York City became the main arrival port. No formal immigration facilities existed at any of the ports until New York City established Castle Garden as its landing depot in 1855. This facility, located at the tip of Manhattan, was replaced in 1892 with Ellis Island, which in turn operated until 1954.

Death Rates during the Voyage

A final aspect to consider is the mortality experienced by the individuals on board the ships. Information taken from the Passenger Lists for the period of the sailing ship between 1820 and 1860 finds a loss rate of one to two percent of the immigrants who boarded (Cohn, 2009). Given the length of the trip and taking into account the ages of the immigrants, this rate represents mortality approximately four times higher than that experienced by non-migrants. Mortality was mainly due to outbreaks of cholera and typhus on some ships, leading to especially high death rates among children and the elderly. There appears to have been little trend over time in mortality or differences in the loss rate by country of origin, though some evidence suggests the loss rate may have differed by port of embarkation. In addition, the best evidence from the colonial period finds a loss rate only slightly higher than that of the antebellum years. In the period after the Civil War, with the change to steamships and the resulting shorter travel time and improved on-board conditions, mortality on the voyages fell, though exactly how much has not been determined.

The Causes of Immigration

Economic historians generally believe no single factor led to immigration. In fact, different studies have tried to explain immigration by emphasizing different factors, with the first important study being done by Thomas (1954). The most recent attempt to comprehensively explain immigration has been by Hatton and Williamson (1998), who focus on the period between 1860 and 1914. Massey (1999) expresses relatively similar views. Hatton and Williamson view immigration from a country during this time as being caused by up to five different factors: (a) the difference in real wages between the country and the United States; (b) the rate of population growth in the country 20 or 30 years before; (c) the degree of industrialization and urbanization in the home country; (d) the volume of previous immigrants from that country or region; and (e) economic and political conditions in the United States. To this list can be added factors not relevant during the 1860 to 1914 period, such as the potato famine, the movement from sail to steam, and the presence or absence of immigration restrictions. Thus, a total of at least eight factors affected immigration.
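To make the list of factors concrete, the sketch below combines them in a simple linear index of a country’s emigration rate. It is a stylized illustration only, not Hatton and Williamson’s actual econometric specification; every variable name and coefficient here is a hypothetical placeholder.

```python
# A stylized (hypothetical) linear model of a country's emigration rate to the
# United States, combining the five factors listed above. Coefficients are
# illustrative placeholders, not estimates from Hatton and Williamson (1998).
from dataclasses import dataclass

@dataclass
class CountryYear:
    wage_gap: float           # (a) log U.S. real wage minus log home real wage
    lagged_pop_growth: float  # (b) population growth 20-30 years earlier, percent per year
    industrialization: float  # (c) e.g. share of the labor force outside agriculture
    migrant_stock: float      # (d) prior emigrants living in the U.S. per 1,000 home population
    us_conditions: float      # (e) index of U.S. economic and political conditions

def predicted_emigration_rate(x: CountryYear) -> float:
    """Emigrants per 1,000 home population; all coefficients are illustrative."""
    return (0.5
            + 4.0 * x.wage_gap
            + 1.5 * x.lagged_pop_growth
            + 2.0 * x.industrialization
            + 0.1 * x.migrant_stock
            + 1.0 * x.us_conditions)

example = CountryYear(wage_gap=0.6, lagged_pop_growth=1.2, industrialization=0.3,
                      migrant_stock=20.0, us_conditions=0.5)
print(round(predicted_emigration_rate(example), 1))  # larger gaps or stocks raise the predicted rate
```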

Causes of Fluctuations in Immigration Levels over Time

As discussed above, the total volume of immigration trended upward until World War I. The initial increase in immigration during the 1830s and 1840s was caused by improvements in shipping, more rapid population growth in Europe, and the potato famine in the latter part of the 1840s, which affected not only Ireland but also much of northwest Europe. As previously noted, the steamship replaced the sailing ship after the Civil War. By substantially reducing the length of the trip and increasing comfort and safety, the steamship encouraged an increase in the volume of immigration. Part of the reason volume increased was that temporary immigration became more likely. In this situation, an individual came to the United States not planning to stay permanently but instead planning to work for a period of time before returning home. All in all, the period from 1865 through 1914, when immigration was not restricted and steamships were dominant, saw an average yearly immigrant volume of almost 529,000. In contrast, average yearly immigration between 1820 and 1860 via sailing ship was only 123,000, and even between 1847 and 1860 was only 266,000.

Another feature of the data in Table 1 is that the yearly volume of immigration fluctuated quite a bit in the period before 1914. The fluctuations are mainly due to changes in economic and political conditions in the United States. Essentially, periods of low volume corresponded with U.S. economic depressions or times of widespread opposition to immigrants. In particular, volume declined during the nativist outbreak of the 1850s, the major depressions of the 1870s and 1890s, and the Great Depression of the 1930s. As discussed in the next section, the United States imposed widespread restrictions on immigration beginning in the 1920s. Since then, the volume has been subject to more direct determination by the United States government. Thus, fluctuations in the total volume of immigration over time are due to four of the eight factors discussed in the first paragraph of this section: the potato famine, the movement from sail to steam, economic and political conditions in the United States, and the presence or absence of immigration restrictions.

Factors Influencing Immigration Rates from Particular Countries

The other four factors are primarily used to explain changes in the source countries of immigration. A larger difference in real wages between the country and the United States increased immigration from the country because it meant immigrants had more to gain from the move. Because most immigrants were between 15 and 35 years old, a higher population growth 20 or 30 years earlier meant there were more individuals in the potential immigrant group. In addition, a larger volume of young workers in a country reduced job prospects at home and further encouraged immigration. A greater degree of industrialization and urbanization in the home country typically increased immigration because traditional ties with the land were broken during this period, making laborers in the country more mobile. Finally, the presence of a larger volume of previous immigrants from that country or region encouraged more immigration because potential immigrants now had friends or relatives to stay with who could smooth their transition to living and working in the United States.

Based on these four factors, Hatton and Williamson explain the rise and fall in the volume of immigration from a country to the United States. Immigrant volume initially increased as a consequence of more rapid population growth and industrialization in a country and the existence of a large gap in real wages between the country and the United States. Within a number of years, volume increased further due to the previous immigration that had occurred. Volume remained high until various changes in Europe caused immigration to decline. Population growth slowed. Most of the countries had undergone industrialization. Partly due to the previous immigration, real wages rose at home and became closer to those in the United States. Thus, each source country went through stages where immigration increased, reached a peak, and then declined.

Differences in the timing of these effects then led to changes in the source countries of the immigrants. The countries of northwest Europe were the first to experience rapid population growth and to begin industrializing. By the latter part of the nineteenth century, immigration from these countries was in the stage of decline. At about the same time, countries in central, eastern, and southern Europe were experiencing the beginnings of industrialization and more rapid population growth. This model holds directly only through the 1920s, because U.S. government policy changed. At that point, quotas were established on the number of individuals allowed to immigrate from each country. Even so, many countries, especially those in northwest Europe, had passed the point where a large number of individuals wanted to leave and thus did not fill their quotas. The quotas were binding for many other countries in Europe in which pressures to immigrate were still strong. Even today, the countries providing the majority of immigrants to the United States, those south of the United States and in Asia and Africa, are places where population growth is high, industrialization is breaking traditional ties with the land, and real wage differentials with the United States are large.

Immigration Policy and Nativism

This section summarizes the changes in U.S. immigration policy. Only the most important policy changes are discussed and a number of relatively minor changes have been ignored. Interested readers are referred to Le May (1987) and Briggs (1984) for more complete accounts of U.S. immigration policy.

Few Restrictions before 1882

Immigration into the United States was subject to virtually no legal restrictions before 1882. Essentially, anyone who wanted to enter the United States could and, as discussed earlier, no specified arrival areas existed until 1855. Individuals simply got off the ship and went about their business. Little opposition among U.S. citizens to immigration is apparent until about the 1830s. The growing concern at this time was due to the increasing volume of immigration, both in absolute terms and relative to the U.S. population, and to the fact that more of the arrivals were Catholic and unskilled. The nativist feeling burst into the open during the 1850s, when the Know-Nothing political party achieved a great deal of political success in the 1854 off-year elections. This party did not favor restrictions on the number of immigrants, though it did seek to restrict immigrants’ ability to become voting citizens quickly. For a short period of time, the Know-Nothings had an important presence in Congress and many state legislatures. With the downturn in immigration in 1855 and the nation’s attention turning more to the slavery issue, their influence receded.

Chinese Exclusion Act

The first restrictive immigration laws were directed against Asian countries. The first was the Chinese Exclusion Act of 1882. This law essentially prohibited the immigration of Chinese citizens and stayed in effect until it was repealed during World War II. In 1907, Japanese immigration was substantially reduced through a Gentlemen’s Agreement between Japan and the United States. It is noteworthy that the Chinese Exclusion Act also prohibited the immigration of “convicts, lunatics, idiots” and those individuals who might need to be supported by government assistance. The latter provision was used to some extent during periods of high unemployment, though, as noted above, immigration fell anyway because of the lack of jobs.

Literacy Test Adopted in 1917

The desire to restrict immigration to the United States grew over the latter part of the nineteenth century. This growth was due partly to the high volume and rate of immigration and partly to the changing national origins of the immigrants, as more began arriving from southern, central, and eastern Europe. In 1907, Congress set up the Immigration Commission, chaired by Senator William Dillingham, to investigate immigration. This body issued a famous report, now viewed as flawed, concluding that immigrants from the newer parts of Europe did not assimilate easily and, in general, blaming them for various economic ills. Initial attempts at restriction took the form of a proposed literacy test for admission to the United States, and such a law was finally passed in 1917. This same law also virtually banned immigration from any country in Asia. Restrictionists were no doubt displeased when the volume of immigration from Europe resumed its former level after World War I in spite of the literacy test. The movement then turned to explicitly limiting the number of arrivals.

1920s: Quota Act and National Origins Act

The Quota Act of 1921 laid the framework for a fundamental change in U.S. immigration policy. It limited the number of immigrants from Europe to a total of about 350,000 per year. National quotas were established in direct proportion to each country’s presence in the U.S. population in 1910. In addition, the act assigned Asian countries quotas near zero. Three years later in 1924, the National Origins Act instituted a requirement that visas be obtained from an American consulate abroad before immigrating, reduced the total European quota to about 165,000, and changed how the quotas were determined. Now, the quotas were established in direct proportion to each country’s presence in the U.S. population in 1890, though this aspect of the act was not fully implemented until 1929. Because relatively few individuals immigrated from southern, central, and eastern Europe before 1890, the effect of the 1924 law was to drastically reduce the number of individuals allowed to immigrate to the United States from these countries. Yet total immigration to the United States remained fairly high until the Great Depression because neither the 1921 nor the 1924 law restricted immigration from the Western Hemisphere. Thus, it was the combination of the outbreak of World War I and the subsequent 1920s restrictions that caused the Western Hemisphere to become a more important source of immigrants to the United States after 1915, though it should be recalled the rate of immigration fell to low levels after 1930.

Immigration and Nationality Act of 1965

The last major change in U.S. immigration policy occurred with the passage of the Immigration and Nationality Act of 1965. This law abolished the quotas based on national origins. Instead, a series of preferences were established to determine who would gain entry. The most important preference was given to relatives of U.S. citizens and permanent resident aliens. By the twenty-first century, about two-thirds of immigrants came through these family channels. Preferences were also given to professionals, scientists, artists, and workers in short supply. The 1965 law kept an overall quota on total immigration for Eastern Hemisphere countries, originally set at 170,000, and no more than 20,000 individuals were allowed to immigrate to the United States from any single country. This law was designed to treat all countries equally. Asian countries were treated the same as any other country, so the virtual prohibition on immigration from Asia disappeared. In addition, for the first time the law also limited the number of immigrants from Western Hemisphere countries, with the original overall quota set at 120,000. It is important to note that neither quota was binding because immediate relatives of U.S. citizens, such as spouses, parents, and minor children, were exempt from the quota. In addition, the United States has admitted large numbers of refugees at different times from Vietnam, Haiti, Cuba, and other countries. Finally, many individuals enter the United States on student visas, enroll in colleges and universities, and eventually get companies to sponsor them for a work visa. Thus, the total number of legal immigrants to the United States since 1965 has always been larger than the combined quotas. This law has led to an increase in the volume of immigration and, by treating all countries the same, has led to Asia recently becoming a more important source of U.S. immigrants.

Though features of the 1965 law have been modified since it was enacted, this law still serves as the basis for U.S. immigration policy today. The most important modifications occurred in 1986 when employer sanctions were adopted for those hiring illegal workers. On the other hand, the same law also gave temporary resident status to individuals who had lived illegally in the United States since before 1982. The latter feature led to very high volumes of legal immigration being recorded in 1989, 1990, and 1991.

The Characteristics of the Immigrants

In this section, various characteristics of the immigrant stream arriving at different points in time are discussed. The following characteristics of immigration are analyzed: gender breakdown, age structure, family vs. individual migration, and occupations listed. Virtually all the information is based on the Passenger Lists, a source discussed above.

Gender and Age

Data are presented in Table 2 on the gender breakdown and age structure of immigration. The gender breakdown and age structure remain fairly consistent in the period before 1930. Generally, about 60% of the immigrants were male. As to age structure, about 20% of immigrants were children, 70% were adults up to age 44, and 10% were older than 44. In most of the period and for most countries, immigrants were typically young single males, young couples, or, especially in the era before the steamship, families. For particular countries, such as Ireland, a large number of the immigrants were single women (Cohn, 1995). The primary exception to this generalization was the 1899-1914 period, when 68% of the immigrants were male and adults under 45 accounted for 82% of the total. This period saw the immigration of a large number of single males who planned to work for a period of months or years and return to their homeland, a development made possible by the steamship shortening the voyage and reducing its cost (Nugent, 1992). The characteristics of the immigrant stream since 1930 have been somewhat different. Males have comprised less than one-half of all immigrants. In addition, the percentage of immigrants over age 45 has increased at the expense of those between the ages of 14 and 44.

Table 2
Immigration by Gender and Age

Years Percent Males Percent under 14 years Percent 14–44 years Percent 45 years and over
1820-1831 70 19 70 11
1832-1846 62 24 67 10
1847-1854 59 23 67 10
1855-1864 58 19 71 10
1865-1873 62 21 66 13
1873-1880 63 19 69 12
1881-1893 61 20 71 10
1894-1898 57 15 77 8
1899-1914 68 12 82 5
1915-1917 59 16 74 10
1918-1930 56 18 73 9
1931-1946 40 15 67 17
1947-1960 45 21 64 15
1961-1970 45 25 61 14
1971-1980 46 24 61 15
1981-1990 52 18 66 16
1991-2000 51 17 65 18
2001-2008 45 15 64 21
2009-2015 45 15 61 24

Notes: From 1918-1970, the age breakdown is “Under 16” and “16-44.” From 1971 to 1998, the age breakdown is “Under 15” and “15-44.” For 2001-2015, it is again “Under 16” and “16-44.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years).

Occupations

Table 3 presents data on the percentage of immigrants who did not report an occupation and the percentage breakdown of those reporting an occupation. The percentage not reporting an occupation declined through 1914. The small percentages between 1894 and 1914 are a reflection of the large number of single males who arrived during this period. As is apparent, the classification scheme for occupations has changed over time. Though there is no perfect way to correlate the occupation categories used in the different time periods, skilled workers comprised about one-fourth of the immigrant stream through 1970. The immigration of farmers was important before the Civil War but declined steadily over time. The percentage of laborers has varied over time, though during some time periods they comprised one-half or more of the immigrants. The highest percentages of laborers occurred during good years for the U.S. economy (1847-54, 1865-73, 1881-93, 1899-1914), because laborers possessed the fewest skills and would have an easier time finding a job when the U.S. economy was strong. Commercial workers, mainly merchants, were an important group of immigrants very early when immigrant volume was low, but their percentage fell substantially over time. Professional workers were always a small part of U.S. immigration until the 1930s. Since 1930, these workers have comprised a larger percentage of immigrants reporting an occupation.

Table 3
Immigration by Occupation

Year Percent with no occup. listed Percent of immigrants with an occupation in each category
Professional Commercial Skilled Farmers Servants Laborers Misc.
1820-1831 61 3 28 30 23 2 14
1832-1846 56 1 12 27 33 2 24
1847-1854 54 0 6 18 33 2 41
1855-1864 53 1 12 23 23 4 37 0
1865-1873 54 1 6 24 18 7 44 1
1873-1880 47 2 4 24 18 8 40 5
1881-1893 49 1 3 20 14 9 51 3
1894-1898 38 1 4 25 12 18 37 3
Professional, technical, and kindred workers Farmers and farm managers Managers, officials, and proprietors, exc. farm Clerical, sales, and kindred workers Craftsmen, foremen, operatives, and kindred workers Private HH workers Service workers, exc. private household Farm laborers and foremen Laborers, exc. farm and mine
1899-1914 26 1 2 3 2 18 15 2 26 33
1915-1919 37 5 4 5 5 21 15 7 11 26
1920-1930 39 4 5 4 7 24 17 6 8 25
1931-1946 59 19 4 15 13 21 13 6 2 7
1947-1960 53 16 5 5 17 31 8 6 3 10
1961-1970 56 23 2 5 17 25 9 7 4 9
1971-1980 59 25 — a 8 12 36 — b 15 5 — c
1981-1990 56 14 — a 8 12 37 — b 22 7 — c
1991-2000 61 17 — a 7 9 23 — b 14 30 — c
2001-2008 76 45 — a — d 14 21 — b 18 5 — c
2009-2015 76 46 — a — d 12 19 — b 19 5 — c

a – included with “Farm laborers and foremen”; b – included with “Service workers, etc.”; c – included with “Craftsmen, etc.”; d – included with “Professional.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years). From 1970 through 2001, the INS has provided the following occupational categories: Professional, specialty, and technical (listed above under “Professional”); Executive, administrative, and managerial (listed above under “Managers, etc.”); Sales; Administrative support (these two are combined and listed above under “Clerical, etc.”); Precision production, craft, and repair; Operator, fabricator, and laborer (these two are combined and listed above under “Craftsmen, etc.”); Farming, forestry, and fishing (listed above under “Farm laborers and foremen”); and Service (listed above under “Service workers, etc.”). Since 2002, the Department of Homeland Security has combined the Professional and Executive categories. Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Skill Levels

The skill level of the immigrant stream is important because it potentially affects the U.S. labor force, an issue considered in the next section. Before turning to this issue, a number of comments can be made concerning the occupational skill level of the U.S. immigration stream. First, skill levels fell substantially in the period before the Civil War. Between 1820 and 1831, only 39% of the immigrants were farmers, servants, or laborers, the least skilled groups. Though the data are not as complete, immigration during the colonial period was almost certainly at least this skilled. By the 1847-54 period, however, the less-skilled percentage had increased to 76%. Second, the less-skilled percentage did not change dramatically late in the nineteenth century when the source of immigration changed from northwest Europe to other parts of Europe. Comparing 1873-80 with 1899-1914, both periods of high immigration, farmers, servants, and laborers accounted for 66% of the immigrants in the former period and 78% in the latter period. The second figure is, however, similar to that during the 1847-54 period. Third, the restrictions on immigration imposed during the 1920s had a sizable effect on the skill level of the immigrant stream. Between 1930 and 1970, only 31-34% of the immigrants were in the least-skilled group.
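
As a simple check, the least-skilled shares cited in this paragraph can be reproduced directly from Table 3 by summing the farmer, servant, and laborer columns; for example,

\[ 23 + 2 + 14 = 39 \text{ percent } (1820\text{-}1831), \qquad 33 + 2 + 41 = 76 \text{ percent } (1847\text{-}1854). \]

The 66 percent figure for 1873-1880 and the 78 percent figure for 1899-1914 follow in the same way from the corresponding rows, with the post-1899 occupational categories grouped analogously.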

Fourth, a deterioration in immigrant skills appears in the numbers in the 1980s and 1990s, and then an improvement appears since 2001. Both changes may be an illusion. In Table 3 for the 1980s and 1990s, the percentage in the “Professional” category falls while the percentages in the “Service” and “Farm workers” categories rise. These changes are, however, due to the amnesty for illegal immigrants resulting from the 1986 law. The amnesty led to the recorded volume of immigration in 1989, 1990, and 1991 being much higher than typical, and most of the “extra” immigrants recorded their occupation as “Service” or “Farm laborer.” If these years are ignored, then little change occurred in the occupational distribution of the immigrant stream during the 1980s and 1990s. Two caveats, however, should be noted. First, the illegal immigrants cannot, of course, be ignored. Second, the skill level of the U.S. labor force was improving over the same period. Thus, relative to the U.S. labor force and including illegal immigration, it is apparent that the occupational skill level of the U.S. immigrant stream declined during the 1980s and 1990s. Turning to the twenty-first century, the percentage of the legal immigrant stream in the highest-skilled category appears to have increased. This conclusion is also not certain, because changes in how occupations were categorized beginning in 2001 make a straightforward comparison potentially inexact. The uncertainty is increased by the growing percentage of immigrants for whom no occupation is reported. It is not clear whether a larger percentage of those arriving actually did not work (recall that a growing percentage of legal immigrants are somewhat older) or whether more simply did not list an occupation. Overall, detecting changes in the skill level of the legal immigrant stream since about 1930 is fraught with difficulty.

The Effects of Immigration on the United States Economy

Though immigration has effects on the country from which the immigrants leave, this section examines only the effects on the United States, mainly those occurring over longer periods of time. Over short periods of time, sizeable and potentially negative effects can occur in a specific area when there is a huge influx of immigrants. A large number of arrivals in a short period of time in one city can cause school systems to become overcrowded, housing prices and welfare payments to increase, and jobs to become difficult to obtain. Yet most economists believe the effects of immigration over time are much less harmful than commonly supposed and, in many ways, are beneficial. The following longer-term issues are discussed: the effects of immigration on the overall wage rate of U.S. workers; the effects on the wages of particular groups of workers, such as those who are unskilled; and the effects on the rate of economic growth, that is, the standard of living, in the United States. Determining the effects of immigration on the United States is complex and virtually none of the conclusions presented here are without controversy.

Immigration’s Impact on Overall Wage Rates

Immigration is popularly thought to lower the overall wage rate in the United States by increasing the supply of individuals looking for jobs. This effect may occur in an area over a fairly short period of time. Over longer time periods, however, wages will only fall if the amounts of other resources don’t change. Wages will not fall if the immigrants bring sufficient amounts of other resources with them, such as capital, or cause the amount of other resources in the economy to increase sufficiently. For example, historically the large-scale immigration from Europe contributed to rapid westward expansion of the United States during most of the nineteenth century. The westward expansion, however, increased the amounts of land and natural resources that were available, factors that almost certainly kept immigration from lowering wage rates. Immigrants also increase the amounts of other resources in the economy through running their own businesses, which both historically and in recent times has occurred at a greater rate among immigrants than native workers. By the beginning of the twentieth century, the westward frontier had been settled. A number of researchers have estimated that immigration did lower wages at this time (Hatton and Williamson, 1998; Goldin, 1994), though others have criticized these findings (Carter and Sutch, 1999). For the recent time period, most studies have found little effect of immigration on the level of wages, though a few have found an effect (Borjas, 1999).

Even if immigration leads to a fall in the wage rate, it does not follow that individual workers are worse off. Workers typically receive income from sources other than their own labor. If wages fall, then many other resource prices in the economy rise. For example, immigration increases the demand for housing and land and existing owners benefit from an increase in the current value of their property. Whether any individual worker is better off or worse off in this case is not easy to determine. It depends on the amounts of other resources each individual possesses.

Immigration’s Impact on Wages of Unskilled Workers

Consider the second issue, the effects of immigration on the wages of unskilled workers. If the immigrants arriving in the country are primarily unskilled, then the larger number of unskilled workers could cause their wage to fall if the overall demand for these workers doesn’t change. A requirement for this effect to occur is that the immigrants be less skilled than the U.S. labor force they enter. As discussed above, during colonial times immigrant volume was small and the immigrants were probably more skilled than the existing U.S. labor force. During the 1830s and 1840s, the volume and rate of immigration increased substantially and the skill level of the immigrant stream fell to approximately match that of the native labor force. Instead of lowering the wages of unskilled workers relative to those of skilled workers, however, the large inflow apparently led to little change in the wages of unskilled workers, while some skilled workers lost and others gained. The explanation for these results is that the larger number of unskilled workers resulting from immigration was a factor in employers adopting new methods of production that used more unskilled labor. As a result of this technological change, the demand for unskilled workers increased so their wage did not decline. As employers adopted these new machines, however, skilled artisans who had previously done many of these jobs, such as iron casting, suffered losses. Other skilled workers, such as many white-collar workers who were not in direct competition with the immigrants, gained. Some evidence exists to support a differential effect on skilled workers during the antebellum period (Williamson and Lindert, 1980; Margo, 2000). After the Civil War, however, the skill level of the immigrant stream was close to that of the native labor force, so immigration probably did not further affect the wage structure through the 1920s (Carter and Sutch, 1999).

Impact since World War II

The lower volume of immigration in the period from 1930 through 1960 meant immigration had little effect on the relative wages of different workers during these years. With the resumption of higher volumes of immigration after 1965, however, and with the immigrants’ skill levels being low through 2000, an effect on relative wages again became possible. In fact, the relative wages of high-school dropouts in the United States deteriorated during the same period, especially after the mid-1970s. Researchers who have studied the question have concluded that immigration accounted for about one-fourth of the wage deterioration experienced by high-school dropouts during the 1980s, though some researchers find a lower effect and others a higher one (Friedberg and Hunt, 1995; Borjas, 1999). Wages are determined by a number of factors other than immigration. In this case, it is thought the changing nature of the economy, such as the widespread use of computers increasing the benefits to education, bears more of the blame for the decline in the relative wages of high-school dropouts.

Economic Benefits from Immigration

Beyond any effect on wages, there are a number of ways in which immigration might improve the overall standard of living in an economy. First, immigrants may engage in inventive or scientific activity, with the result being a gain to everyone. Evidence exists for both the historical and more recent periods that the United States has attracted individuals with an inventive/scientific nature. The United States has always been a leader in these areas, and individuals are more likely to be successful in such an environment than in one where these activities are not as highly valued. Second, immigrants expand the size of markets for various goods, which may lower firms’ average costs as firm size increases. The result would be a decrease in the price of the goods in question. Third, most individuals immigrate between the ages of 15 and 35, so the expenses of their basic schooling are paid abroad. In the past, most immigrants, being of working age, immediately got a job. Thus, immigration increased the percentage of the population in the United States that worked, a factor that raises the average standard of living in a country. Even in more recent times, most immigrants work, though the increased proportion of older individuals in the immigrant stream means the positive effects from this factor may be smaller than in the past. Fourth, while immigrants may place a strain on government services in an area, such as the school system, they also pay taxes. Even illegal immigrants directly pay sales taxes on their purchases of goods and indirectly pay property taxes through their rent. Finally, the fact that fewer individuals immigrate during periods of high unemployment is also beneficial. By reducing the number of people looking for jobs during these periods, this factor increases the likelihood that U.S. citizens will be able to find a job.

The Experience of Immigrants in the U.S. Labor Market

This section examines the labor market experiences of immigrants in the United States. The issue of discrimination against immigrants in jobs is investigated, along with the degree of success immigrants experienced over time. Again, the issues are investigated for the historical period of immigration as well as for more recent times. Interested readers are directed to Borjas (1999), Ferrie (1999), Carter and Sutch (1999), Hatton and Williamson (1998), and Friedberg and Hunt (1995) for more technical discussions.

Did Immigrants Face Labor Market Discrimination?

Discrimination can take various forms. The first form is wage discrimination, in which a worker of one group is paid a wage lower than an equally productive worker of another group. Empirical tests of this hypothesis generally find this type of discrimination has not existed. At any point in time, immigrants have been paid the same wage for a specific job as a native worker. If immigrants generally received lower wages than native workers, the differences reflected the lower skills of the immigrants. Historically, as discussed above, the skill level of the immigrant stream was similar to that of the native labor force, so wages did not differ much between the two groups. During more recent years, the immigrant stream has been less skilled than the native labor force, leading to the receipt of lower wages by immigrants. A second form of discrimination is in the jobs an immigrant is able to obtain. For example, in 1910, immigrants accounted for over half of the workers in various jobs; examples are miners, apparel workers, workers in steel manufacturing, meat packers, bakers, and tailors. If a reason for the employment concentration was that immigrants were kept out of alternative higher paying jobs, then the immigrants would suffer. This type of discrimination may have occurred against Catholics during the 1840s and 1850s and against the immigrants from central, southern, and eastern Europe after 1890. In both cases, it is possible the immigrants suffered because they could not obtain higher paying jobs. In more recent years, reports of immigrants trained as doctors, say, in their home country but not allowed to easily practice as such in the United States, may represent a similar situation. Yet the open nature of the U.S. schooling system and economy has been such that this effect usually did not impact the fortunes of the immigrants’ children or did so at a much smaller rate.

Wage Growth, Job Mobility, and Wealth Accumulation

Another aspect of how immigrants fared in the U.S. labor market is their experiences over time with respect to wage growth, job mobility, and wealth accumulation. A study done by Ferrie (1999) for immigrants arriving between 1840 and 1850, the period when the inflow of immigrants relative to the U.S. population was the highest, found immigrants from Britain and Germany generally improved their job status over time. By 1860, over 75% of the individuals reporting a low-skilled job on the Passenger Lists had moved up into a higher-skilled job, while fewer than 25% of those reporting a high-skilled job on the Passenger Lists had moved down into a lower-skilled job. Thus, the job mobility for these individuals was high. For immigrants from Ireland, the experience was quite different; the percentage of immigrants moving up was only 40% and the percentage moving down was over 50%. It isn’t clear if the Irish did worse because they had less education and fewer skills or whether the differences were due to some type of discrimination against them in the labor market. As to wealth, all the immigrant groups succeeded in accumulating larger amounts of wealth the longer they were in the United States, though their wealth levels fell short of those enjoyed by natives. Essentially, the evidence indicates antebellum immigrants were quite successful over time in matching their skills to the available jobs in the U.S. economy.

The extent to which immigrants had success over time in the labor market in the period since the Civil War is not clear. Most researchers have thought that immigrants who arrived before 1915 had a difficult time. For example, Hanes (1996) concludes that immigrants, even those from northwest Europe, had slower earnings growth over time than natives, a finding he argues was due to poor assimilation. Hatton and Williamson (1998), on the other hand, criticize these findings on technical grounds and conclude that immigrants assimilated relatively easily into the U.S. labor market. For the period after World War II, Chiswick (1978) argues that immigrants’ wages have increased relative to those of natives the longer the immigrants have been in the United States. Borjas (1999) has criticized Chiswick’s finding by suggesting it is caused by a decline in the skills possessed by the arriving immigrants between the 1950s and the 1990s. Borjas finds that 25- to 34-year-old male immigrants who arrived in the late 1950s had wages 9% lower than comparable native males, but by 1970 had wages 6% higher. In contrast, those arriving in the late 1970s had wages 22% lower at entry. By the late 1990s, their wages were still 12% lower than comparable natives. Overall, the degree of success experienced by immigrants in the U.S. labor market remains an area of controversy.

References

Borjas, George J. Heaven’s Door: Immigration Policy and the American Economy. Princeton: Princeton University Press, 1999.

Briggs, Vernon M., Jr. Immigration and the American Labor Force. Baltimore: Johns Hopkins University Press, 1984.

Carter, Susan B., and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 319-341. New York: Russell Sage Foundation, 1999.

Carter, Susan B., et al. Historical Statistics of the United States: Earliest Times to the Present – Millennial Edition. Volume 1: Population. New York: Cambridge University Press, 2006.

Chiswick, Barry R. “The Effect of Americanization on the Earnings of Foreign-Born Men.” Journal of Political Economy 86 (1978): 897-921.

Cohn, Raymond L. “A Comparative Analysis of European Immigrant Streams to the United States during the Early Mass Migration.” Social Science History 19 (1995): 63-89.

Cohn, Raymond L.  “The Transition from Sail to Steam in Immigration to the United States.” Journal of Economic History 65 (2005): 479-495.

Cohn, Raymond L. Mass Migration under Sail: European Immigration to the Antebellum United States. New York: Cambridge University Press, 2009.

Erickson, Charlotte J. Leaving England: Essays on British Emigration in the Nineteenth Century. Ithaca: Cornell University Press, 1994.

Ferenczi, Imre. International Migrations. New York: Arno Press, 1970.

Ferrie, Joseph P. Yankeys Now: Immigrants in the Antebellum United States, 1840-1860. New York: Oxford University Press, 1999.

Friedberg, Rachel M., and Jennifer Hunt. “The Impact of Immigrants on Host Country Wages, Employment and Growth.” The Journal of Economic Perspectives 9 (1995): 23-44.

Goldin, Claudia. “The Political Economy of Immigration Restrictions in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary D. Libecap, 223-257. Chicago: University of Chicago Press, 1994.

Grabbe, Hans-Jürgen. “European Immigration to the United States in the Early National Period, 1783-1820.” Proceedings of the American Philosophical Society 133 (1989): 190-214.

Hanes, Christopher. “Immigrants’ Relative Rate of Wage Growth in the Late Nineteenth Century.” Explorations in Economic History 33 (1996): 35-64.

Hansen, Marcus L. The Atlantic Migration, 1607-1860. Cambridge, MA.: Harvard University Press, 1940.

Hatton, Timothy J., and Jeffrey G. Williamson. The Age of Mass Migration: Causes and Economic Impact. New York: Oxford University Press, 1998.

Jones, Maldwyn Allen. American Immigration. Chicago: University of Chicago Press, Second Edition, 1960.

Le May, Michael C. From Open Door to Dutch Door: An Analysis of U.S. Immigration Policy Since 1820. New York: Praeger, 1987.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Massey, Douglas S. “Why Does Immigration Occur? A Theoretical Synthesis.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 34-52. New York: Russell Sage Foundation, 1999.

Miller, Kerby A. Emigrants and Exiles: Ireland and the Irish Exodus to North America. Oxford: Oxford University Press, 1985.

Nugent, Walter. Crossings: The Great Transatlantic Migrations, 1870-1914. Bloomington and Indianapolis: Indiana University Press, 1992.

Taylor, Philip. The Distant Magnet. New York: Harper & Row, 1971.

Thomas, Brinley. Migration and Economic Growth: A Study of Great Britain and the Atlantic Economy. Cambridge, U.K.: Cambridge University Press, 1954.

U.S. Department of Commerce. Historical Statistics of the United States. Washington, DC, 1976.

U.S. Immigration and Naturalization Service. Statistical Yearbook of the Immigration and Naturalization Service. Washington, DC: U.S. Government Printing Office, various years.

Walker, Mack. Germany and the Emigration, 1816-1885. Cambridge, MA: Harvard University Press, 1964.

Williamson, Jeffrey G., and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1 —
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 — 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year Census of Manufacturing Jones Manufacturing Owen Nonstudent Males Greis Manufacturing Greis All Workers Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which information is available. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1910 coalminers’ workweeks were roughly one-fourth to one-third shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
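
As a quick check, the lifetime figures in Table 6 imply the shares of discretionary time projected above for 2040:

\[ \frac{75{,}900}{321{,}900} \approx 0.24, \qquad \frac{246{,}000}{321{,}900} \approx 0.76, \]

that is, roughly one-fourth of discretionary time devoted to work and over three-fourths available for leisure.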

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than those in Denmark, and less than those in the USSR.
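
As a check, the percentage declines in the Greis series follow directly from the annual-hours figures just cited:

\[ \frac{1{,}908 - 1{,}704}{1{,}908} \approx 0.107 \quad (10.7 \text{ percent}), \qquad \frac{2{,}170 - 1{,}698}{2{,}170} \approx 0.218 \quad (21.8 \text{ percent}). \]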

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good, and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

The Federal Public Works Act of 1912 provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period — the Adamson Act of 1916, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant Saturday half-holidays or Saturdays off entirely — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had done so by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford alone employed more than half of the nation’s approximately 400,000 workers on five-day weeks. However, Ford’s motives were questioned by many employers, who argued that cutting the workweek below about forty-eight hours brought no further productivity gains. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the end of the war, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to eight-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an eight-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit, and few replacements will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.
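The trade-off just described can be made concrete with a small numerical sketch. The example below is purely illustrative: the wage, the quasi-fixed cost per worker, and the point at which fatigue sets in are hypothetical numbers rather than estimates from the literature, but the exercise shows why an employer facing fixed hiring costs and fatigue-driven productivity losses tends to settle on an intermediate workweek.

```python
# Purely illustrative sketch of the employer's hours trade-off described above.
# All parameter values are hypothetical.

def cost_per_unit_output(hours, wage=0.50, fixed_cost=5.0, fatigue_point=48):
    """Weekly labor cost per unit of output for one employee.

    wage          -- hourly wage (hypothetical)
    fixed_cost    -- weekly quasi-fixed cost per worker, e.g. insurance (hypothetical)
    fatigue_point -- hours beyond which each extra hour yields only half a unit (hypothetical)
    """
    effective_output = min(hours, fatigue_point) + 0.5 * max(hours - fatigue_point, 0)
    total_cost = wage * hours + fixed_cost
    return total_cost / effective_output

# Search a range of workweeks for the cost-minimizing length.
best_hours = min(range(30, 81), key=cost_per_unit_output)
print(best_hours, round(cost_per_unit_output(best_hours), 4))  # 48 0.6042
```

Under these assumptions, raising the fixed cost per worker pushes the cost-minimizing workweek up, while an earlier onset of fatigue pushes it down, which is exactly the tension the paragraph above describes.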

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower (an elasticity of roughly -0.13 to -0.05). Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
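Read literally, an elasticity in that range implies only a modest cross-city difference in hours. The short calculation below, which assumes a hypothetical 50-hour baseline workweek and a hypothetical 10 percent wage gap between cities, simply applies the reported elasticities; it is an arithmetic illustration, not a re-estimation.

```python
# Arithmetic illustration of the reported wage elasticities of hours
# (about -0.13 to -0.05). Baseline workweek and wage gap are hypothetical.

baseline_hours = 50.0   # hypothetical baseline workweek
wage_gap = 0.10         # one city's wages assumed 10 percent higher than another's

for elasticity in (-0.13, -0.05):
    pct_change = elasticity * wage_gap            # proportional change in hours
    predicted = baseline_hours * (1 + pct_change)
    print(f"elasticity {elasticity:+.2f}: {predicted:.1f} hours ({pct_change:+.2%})")
```

Under these assumptions even a sizable wage gap shifts the predicted workweek by well under an hour, consistent with the small cross-sectional effect reported above.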

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996) is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in Southeast China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy, since it began as a commercial entrepot for China’s regional and global trade, a role it still plays today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. Hong Kong also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to the British under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services, based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community numbered over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually there was increasing diversification into services and retail outlets to meet the needs of the local population, as well as shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining, cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. Britain extended its rule over the mainland territory of Hong Kong by two further treaties in this period: Kowloon was ceded in 1860, and the New Territories were leased for 99 years in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargoes on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong was vital to the international economic links that the PRC continued in order to pursue industrialization and support grain imports. Even during the period of self-sufficiency in the 1960s, Hong Kong’s purchases of food and water from the PRC were a vital source of foreign exchange revenue for the mainland, which ensured Hong Kong’s continued usefulness to it. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong, thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry emerged in the nineteenth century and it began to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s to clothing, electronics, plastics and other labor-intensive production mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SMEs) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still accounted for 98 percent of enterprises, providing 60 percent of total private employment.

Second, until the late 1960s, the government did not engage in active industrial planning. This was partly because the government was preoccupied with social spending on housing large flows of immigrants, and partly because of an ideological sympathy for free market forces. This means that Hong Kong fits outside the usual models of Asian economic development based on state-led industrialization (Japan, South Korea, Singapore, Taiwan) or domination of foreign firms (Singapore) or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints in a series of treaties with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual percentage change in real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year. From 1976 to 1996 GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68 as a result of local disturbances from the onset of the Cultural Revolution in the PRC, and again in 1973 to 1975 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1. Annual percentage change of per capita GDP, 1962-2001
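As a rough check on what growth rates of this size mean cumulatively, the short calculation below compounds the two average rates cited above over their respective periods. The period lengths are inferred from the years given in the text, and the resulting multiples are approximations rather than figures taken from the underlying GDP series.

```python
# Rough compounding of the average growth rates cited in the text.
# Period lengths are inferred from the years given; results are approximate.

periods = [
    ("1962-1973", 0.065, 1973 - 1962),   # 6.5% per year over 11 years
    ("1976-1996", 0.056, 1996 - 1976),   # 5.6% per year over 20 years
]

for label, rate, years in periods:
    multiple = (1 + rate) ** years
    print(f"{label}: real GDP per capita roughly x{multiple:.1f} over {years} years")
```

On these assumptions, the cited averages imply that per capita output roughly doubled between 1962 and 1973 and roughly tripled again between 1976 and 1996.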

Reintegration with China, 1978-1997

The Open Door Policy of the PRC announced by Deng Xiao-ping at the end of 1978 marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of the total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there has been increased investment in financial services, tourism and retail trade.

While manufacturing was moved out of the colony during the 1980s and 1990s, there was a surge in the service sector. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably it was accomplished without faltering growth rates overall, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force from 1981 to 2000 while manufacturing employment fell from 39% to 10% in the same period.

Figure 2. GDP by economic activity at current prices
Figure 3. Contribution to Hong Kong’s GDP at factor prices

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country-two systems” policy was formulated which left Hong Kong monetarily and economically separate from the mainland, with exchange and trade controls remaining in place as well as restrictions on the movement of people. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, the Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the Severe Acute Respiratory Syndrome (SARS) outbreak of 2003, the Asian Financial Crisis pushed Hong Kong into recession, with a rise in unemployment (6% on average from 1998-2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly in the regional crisis, the economic prospects for Hong Kong have been tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: IB Tauris, 2004. For accounts of Hong Kong’s economic history see, D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority www.info.gov.hk/hkma.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-hong-kong/

Health Insurance in the United States

Melissa Thomasson, Miami University

This article describes the development of the U.S. health insurance system and its growth in the twentieth century. It examines the roles of important factors including medical technology, hospitals and physicians, and government policy culminating in the development of Medicare and Medicaid.

1900-1920: Sickness Insurance versus Health Insurance

Prior to 1920, the state of medical technology generally meant that very little could be done for many patients, and that most patients were treated in their homes. Table 1 provides a list of pioneering early advances in medicine. Hospitals did not assume their modern form until after the turn of the century when antiseptic methods were well established. Even then, surgery was often performed in private homes until the 1920s.

Table 1: Milestones in Medical Technology

1850-1870: Louis Pasteur, Joseph Lister and others develop understanding of bacteriology, antisepsis, and immunology.

1870-1910: Identification of various infectious agents including spirochaeta pallida (syphilis), typhus, pneumococcus, and malaria. Diphtheria antitoxin developed. Surgery fatality rates fall.

1887: S.S.K. von Basch invents instrument to measure blood pressure.

1895: Wilhelm Roentgen develops X-rays.

1910: Salvarsan (for syphilis) proves to be first drug treatment that destroys disease without injuring patient.

1920-1946: Insulin isolated (1922), sulfa developed (1935), large-scale production of synthetic penicillin begins (1946).

1955: Jonas Salk announces development of vaccine for polio.

Medical Expenditures Initially Low

Given the rudimentary state of medical technology before 1920, most people had very low medical expenditures. A 1918 Bureau of Labor Statistics survey of 211 families living in Columbus, Ohio, found that only 7.6 percent of their average annual medical expenditures paid for hospital care (Ohio Report, p. 116). In fact, the chief cost associated with illness was not the cost of medical care, but rather the fact that sick people couldn’t work and didn’t get paid. A 1919 State of Illinois study reported that lost wages due to sickness were four times larger than the medical expenditures associated with treating the illness (State of Illinois, pp. 15-17). As a result, most people felt they didn’t need health insurance. Instead, households purchased “sickness” insurance — similar to today’s “disability” insurance — to provide income replacement in the event of illness.1

Insurance Companies Initially Unwilling to Offer Health Insurance Policies

The low demand for health insurance at the time was matched by the unwillingness of commercial insurance companies to offer private health insurance policies. Commercial insurance companies did not believe that health was an insurable commodity because of the high potential for adverse selection and moral hazard. They felt that they lacked the information to accurately calculate risks and write premiums accordingly. For example, people in poor health may claim to be healthy and then sign up for health insurance. A problem with moral hazard may arise if people change their behavior — perhaps engaging in more risky activities — after they purchase health insurance. According to The Insurance Monitor, “the opportunities for fraud [in health insurance] upset all statistical calculations…. Health and sickness are vague terms open to endless construction. Death is clearly defined, but to say what shall constitute such loss of health as will justify insurance compensation is no easy task” (July 1919, vol. 67 (7), p. 38).
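The adverse-selection worry can be illustrated with a stylized example that is not drawn from the period’s actuarial practice. In the sketch below, the expected costs and willingness-to-pay figures are invented; the point is only the mechanism: if an insurer prices coverage at the pool’s average cost, the healthiest applicants may decline to buy, the remaining pool becomes costlier, and the premium must rise again.

```python
# Stylized illustration of adverse selection; all figures are hypothetical.
# Each applicant is (expected annual medical cost, willingness to pay for coverage).
applicants = [(20, 30), (40, 55), (80, 100), (160, 190)]

premium = sum(cost for cost, _ in applicants) / len(applicants)  # price at pool average
for round_number in range(1, 4):
    buyers = [(c, wtp) for c, wtp in applicants if wtp >= premium]  # who still buys?
    print(f"round {round_number}: premium ${premium:.0f}, buyers {len(buyers)} of {len(applicants)}")
    if not buyers:
        break
    premium = sum(c for c, _ in buyers) / len(buyers)  # reprice at remaining pool's average
```

In this hypothetical pool, the number of buyers shrinks from four to one as the premium climbs from the population average toward the cost of the sickest remaining applicant, which is the kind of unraveling early insurers feared they could not price around.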

Failure of Compulsory, Nationalized Health Insurance

The fact that people generally felt actual health insurance (as opposed to sickness insurance) was unnecessary prior to 1920 also helped to defeat proposals for compulsory, nationalized health insurance in the same period. Although many European nations had adopted some form of compulsory, nationalized health insurance by 1920, proposals sponsored by the American Association for Labor Legislation (AALL) to enact compulsory health insurance in several states were never adopted (see Numbers 1978). Compulsory health insurance failed in this period for several reasons. First, popular support for the legislation was low because of the low demand for health insurance in general. Second, physicians, pharmacists and commercial insurance companies were strong opponents of the legislation. Physicians opposed the legislation because they feared that government intervention would limit their fees. Pharmacists opposed the legislation because it included prescription drug benefits that they feared would undermine their business. While commercial insurance firms did not offer health insurance during this period, a large part of their business was offering burial insurance to pay funeral costs. Under the proposed legislation, commercial firms would be excluded from offering burial insurance. As a result, they opposed the legislation, which they feared would also open the door towards greater government intervention in the insurance business.

1920-1930: The Rising Price of Medical Care

As the twentieth century progressed, several changes occurred that tended to increase the role that medicine played in people’s lives and to shift the focus of treatment of acute illness from homes to hospitals. These changes caused the price of medical care to rise as demand for medical care increased and the cost of supplying medical care rose with increased standards of quality for physicians and hospitals.

Increases in the Demand for Medical Care

As the population shifted from rural areas to urban centers, families lived in smaller homes with less room to care for sick family members (Faulkner 1960, p. 509). Given that medical care is a normal good, rising incomes also helped to increase demand. Advances in medical technology along with the growing acceptance of medicine as a science led to the development of hospitals as treatment centers and helped to encourage sick people to visit physicians and hospitals. Rosenberg (1987) notes that “by the 1920s… prospective patients were influenced not only by the hope of healing, but by the image of a new kind of medicine — precise, scientific and effective” (p. 150). This scientific aura began to develop in part as licensure and standards of care among practitioners increased, which led to an increase in the cost of providing medical care.

Rising Medical Costs

Physician quality began to improve after several changes brought about by the American Medical Association (AMA) in the 1910s. In 1904, the AMA formed the Council on Medical Education (CME) to standardize the requirements for medical licensure. The CME invited Abraham Flexner of the Carnegie Foundation for the Advancement of Teaching to evaluate the status of medical education. Flexner’s highly critical report on medical education was published in 1910. According to Flexner, the current methods of medical education had “… resulted in enormous over-production at a low level, and that, whatever the justification in the past, the present situation… can be more effectively met by a reduced output of well trained men than by further inflation with an inferior product” (Flexner, p. 16). Flexner argued for stricter entrance requirements, better facilities, higher fees, and tougher standards. Following the publication of the Flexner Report, the number of medical schools in the United States dropped from 131 in 1910 to 95 in 1915. By 1922, the number of medical schools in the U.S. had fallen even further to 81 (Journal of the American Medical Association, August 12, 1922, p. 633). These increased requirements for physician licensure, education and the accreditation of medical schools restricted physician supply, putting upward pressure on the costs of physicians’ services.2

After Flexner’s report, a further movement towards standardization and accreditation came in 1913, when the American College of Surgeons (ACS) was founded. Would-be members of the ACS had to meet strict standards. For a hospital to gain the accreditation of the ACS, it had to meet a set of standards relating to the staff, records, and diagnostic and therapeutic facilities available. Of 692 large hospitals examined in 1918, only 13 percent were approved. By 1932, 93 percent of the 1,600 hospitals examined met ACS requirements (Shyrock 1979, p. 348).

Increasing requirements for licensure and accreditation, in addition to a rising demand for medical care, eventually led to rising costs. In 1927, the Committee on the Costs of Medical Care (CCMC) was formed to investigate the medical expenses of American families. Comprised of physicians, economists, and public health specialists, the CCMC published 27 research reports, offering reliable estimates of national health care expenditures. According to one CCMC study, the average American family had medical expenses totaling $108 in 1929, with hospital expenditures comprising 14 percent of the total bill (Falk, Rorem, and Ring 1933, p. 89). In 1929, medical charges for urban families with incomes between $2,000 and $3,000 per year averaged $67 if there were no hospitalizations, but averaged $261 if there were any illnesses that required hospitalization (see Falk, Rorem, and Ring). By 1934, Michael M. Davis, a leading advocate of reform, noted that hospital costs had risen to nearly 40 percent of a family’s medical bill (Davis 1934, p. 211). By the end of the 1920s, families began to demand greater amounts of medical care, and the costs of medical care began to increase.

1930-1940: The Birth of Blue Cross and Blue Shield

Blue Cross: Hospital Insurance

As the demand for hospital care increased in the 1920s, a new payment innovation developed at the end of the decade that would revolutionize the market for health insurance. The precursor to Blue Cross was founded in 1929 by a group of Dallas teachers who contracted with Baylor University Hospital to provide 21 days of hospitalization for a fixed $6.00 payment. The Baylor plan developed as a way to ensure that people paid their bills. One official connected with the plan compared hospital bills to cosmetics, noting that the nation’s cosmetic bill was actually more than the nation’s hospital bill, but that “We spend a dollar or so at a time for cosmetics and do not notice the high cost. The ribbon counter clerk can pay 50¢, 75¢, or $1 a month, yet… it would take about twenty years to set aside a large hospital bill” (The American Foundation 1937, p. 1023).

Pre-paid hospital service plans grew over the course of the Great Depression. Pre-paid hospital care was mutually advantageous to both subscribers and hospitals during the early 1930s, when consumers and hospitals suffered from falling incomes. While the pre-paid plans allowed consumers to affordably pay for hospital care, they also benefited hospitals by providing them with a way to earn income during a time of falling hospital revenue. Only 62 percent of beds in private hospitals were occupied on average, compared to 89 percent of beds in public hospitals that accepted charity care (Davis and Rorem 1932, p. 5). As one pediatrician in the Midwest noted, “Things went swimmingly as long as endowed funds allowed the hospitals to carry on. When the funds from endowments disappeared the hospitals got into trouble and thus the various plans to help the hospitals financially developed” (American Foundation 1937, p. 756).

The American Hospital Association (AHA) encouraged hospitals in such endeavors ostensibly as a means of relieving “… from financial embarrassment and even from disaster in the emergency of sickness those who are in receipt of limited incomes” (Reed 1947, p. 14). However, the prepayment plans also clearly benefited hospitals by giving them a constant stream of income. Since single-hospital plans generated greater competition among hospitals, community hospitals began to organize with each other to offer hospital coverage and to reduce inter-hospital competition. These plans eventually combined under the auspices of the AHA under the name Blue Cross.

Blue Cross Designed to Reduce Price Competition among Hospitals

The AHA designed the Blue Cross guidelines so as to reduce price competition among hospitals. Prepayment plans seeking the Blue Cross designation had to provide subscribers with free choice of physician and hospital, a requirement that eliminated single-hospital plans from consideration. Blue Cross plans also benefited from special state-level enabling legislation allowing them to act as non-profit corporations, to enjoy tax-exempt status, and to be free from the usual insurance regulations. Originally, the reason for this exemption was that Blue Cross plans were considered to be in society’s best interest since they often provided benefits to low-income individuals (Eilers 1963, p. 82). Without the enabling legislation, Blue Cross plans would have had to organize under the laws for insurance companies. If they organized as stock companies, the plans would have had to meet reserve requirements to ensure their solvency. Organizing as mutual companies meant that they would either have to meet reserve requirements or be subject to assessment liability.3 Given that most plans had few financial resources available to them, they would not have been able to meet these requirements.

The enabling legislation freed the plans from the traditional insurance reserve requirements because the Blue Cross plans were underwritten by hospitals. Hospitals contracted with the plans to provide subscriber services, and agreed to provide service benefits even during periods when the plans lacked funds to provide reimbursement. Under the enabling legislation, the plans “enjoy the advantages of exemption from the regular insurance laws of the state, are freed from the obligation of maintaining the high reserves required of commercial insurance companies and are relieved of paying taxes” (Anderson 1944, p. 11).4 Enabling laws served to increase the amount of health insurance sold in states in which they were implemented, causing growth in the market (Thomasson 2002).

Blue Shield: Insurance for Physician Services

Despite the success of Blue Cross and pre-paid hospitalization policies, physicians were much slower in providing pre-paid care. Blue Cross and Blue Shield developed separately, with little coordination between them (McDavitt 1946). Physicians worried that a third-party system of payment would lower their incomes by interfering with the physician-patient relationship and restricting the ability of physicians to price discriminate. However, in the 1930s, physicians were faced with two situations that spurred them to develop their own pre-paid plans. First, Blue Cross plans were becoming popular, and some physicians feared that hospitals would move into the realm of providing insurance for physician services, thus limiting physician autonomy. In addition, advocates of compulsory health insurance looked to the emerging social security legislation as a logical means of providing national health care. Compulsory health insurance was even more anathema to physicians than voluntary health insurance. It became clear to physicians that in order to protect their interests, they would be better off pre-empting both hospitals and compulsory insurance proponents by sculpting their own plan.

Thus, to protect themselves from competition with Blue Cross, as well as to provide an alternative to compulsory insurance, physicians began to organize a framework for pre-paid plans that covered physician services. In this regard, the American Medical Association (AMA) adopted a set of ten principles in 1934 “… which were apparently promulgated for the primary purposes of preventing hospital service plans from underwriting physician services and providing an answer to the proponents of compulsory medical insurance” (Hedinger 1966, p. 82). Within these rules were provisions that ensured that voluntary health insurance would remain under physician supervision and not be subject to the control of non-physicians. In addition, physicians wanted to retain their ability to price discriminate (to charge different rates to different customers, based on their ability to pay).

These principles were reflected in the actions of physicians as they established enabling legislation similar to that which allowed Blue Cross plans to operate as non-profits. Like the Blue Cross enabling legislation, these laws allowed Blue Shield plans to be tax-exempt and free from the provisions of insurance statutes. Physicians lobbied to ensure that they would be represented on the boards of all such plans, and acted to ensure that all plans required free choice of physician. In 1939, the California Physicians’ Service (CPS) began to operate as the first prepayment plan designed to cover physicians’ services. Open to employees earning less than $3,000 annually, the CPS provided physicians’ services to employee groups for a fee of $1.70 per employee per month (Scofea 1994, p. 5). To further these efforts, the AMA encouraged state and local medical societies to form their own prepayment plans. These physician-sponsored plans ultimately affiliated and became known as Blue Shield in 1946.

Blue Shield plans offered medical and surgical benefits for hospitalized members, although certain plans also covered visits to doctors’ offices. While some plans were like the Blue Cross plans in that they offered service benefits to low-income subscribers (meaning that the plans directly reimbursed physicians for services), most Blue Shield plans operated on a mixed service-indemnity basis. Doctors charged patients who were subscribers to Blue Shield the difference between their actual charges and the amount for which they were reimbursed by Blue Shield. In this manner, doctors could retain their power to price discriminate by charging different prices to different patients.

1940-1960: Growth in the Health Insurance Market

After the success of Blue Cross and Blue Shield in the 1930s, continued growth in the market occurred for several reasons. The supply of health insurance increased once commercial insurance companies decided to enter the market for health coverage. Demand for health insurance increased as medical technology further advanced, and as government policies encouraged the popularity of health insurance as a form of employee compensation.

Growth in Supply: Commercial Insurance Companies Enter the Market

Blue Cross and Blue Shield were first to enter the health insurance market because commercial insurance companies were reluctant to even offer health insurance early in the century. As previously mentioned, they feared that they would not be able to overcome problems relating to adverse selection, so that offering health insurance would not be profitable. The success of Blue Cross and Blue Shield showed just how easily adverse selection problems could be overcome: by focusing on providing health insurance only to groups of employed workers. This would allow commercial insurance companies to avoid adverse selection because they would insure relatively young, healthy people who did not individually seek health insurance. After viewing the success of Blue Cross and Blue Shield, commercial health insurance companies began to move rapidly into the health insurance market. As shown in Figure 1, the market for health insurance exploded in size in the 1940s, growing from a total enrollment of 20,662,000 in 1940 to 142,334,000 in 1950 (Health Insurance Institute 1961, Source Book, p. 10). As the Superintendent of Insurance in New York, Louis H. Pink, noted in 1939:

… There are twenty stock insurance companies which are today issuing in this state Individual Medical Reimbursement, Hospitalization, and Sickness Expense Policies. About half of these have only recently gone into this field. It is no doubt the interest aroused by the non-profit associations which has induced the regular insurance companies to extend their activities in this way (Pink 1939).

Figure 1: Number of Persons with Health Insurance (thousands), 1940-1960

Source: Source Book of Health Insurance Data, 1965.

Community Rating versus Experience Rating

The success of commercial companies was aided by two factors. First, the competitiveness of Blue Cross and Blue Shield was limited by the fact that their non-profit status required that they community rate their policies. Under a system of community rating, insurance companies charge the same premium to sicker people as they do to healthy people. Since they were not considered to be nonprofit organizations, commercial insurance companies were not required to community rate their policies. Instead, commercial insurance companies could engage in experience rating, whereby they charged sicker people higher premiums and healthier people lower premiums. As a result, commercial companies could often offer relatively healthy groups lower premiums than the Blue Cross and Blue Shield plans, and gain their business. Thus, the commercial health insurance business boomed, as shown in Figure 2.
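
The arithmetic behind this rating advantage can be sketched with a simple calculation. The Python snippet below uses invented group sizes and expected claim costs (illustrative numbers only, not historical figures) to show how an insurer that experience rates can undercut a community-rating plan for a relatively healthy group.

# Hypothetical illustration of community vs. experience rating.
# The groups, sizes, and expected claim costs below are invented for
# exposition; they are not drawn from the historical record.

groups = {
    "young_office_workers": {"members": 1000, "expected_annual_claims": 40.0},
    "older_mill_workers":   {"members": 1000, "expected_annual_claims": 120.0},
}

# Community rating: one premium for the whole pool, equal to the
# pool-wide average expected cost per member.
total_cost = sum(g["members"] * g["expected_annual_claims"] for g in groups.values())
total_members = sum(g["members"] for g in groups.values())
community_premium = total_cost / total_members

# Experience rating: each group pays its own expected cost.
experience_premiums = {name: g["expected_annual_claims"] for name, g in groups.items()}

print(f"Community-rated premium (all groups): ${community_premium:.2f}")
for name, premium in experience_premiums.items():
    print(f"Experience-rated premium for {name}: ${premium:.2f}")

# The healthier group pays $40 under experience rating versus $80 under
# community rating, so an experience-rating commercial insurer can
# undercut a community-rating plan for that group.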

Figure 2: Enrollment in Commercial Insurance Plans v. Blue Cross and Blue Shield

Source: Source Book of Health Insurance Data, 1965.

Figure 2 illustrates the growth of commercial insurance relative to Blue Cross and Blue Shield. So successful was commercial insurance that by the early 1950s, commercial plans had more subscribers than Blue Cross and Blue Shield. In 1951, 41.5 million people were enrolled in group or individual hospital insurance plans offered by commercial insurance companies, while only 40.9 million people were enrolled in Blue Cross and Blue Shield plans (Health Insurance Institute 1965, Source Book, p. 14).

Growth in Demand: Government Policies that Encouraged Health Insurance

Offering insurance policies to employee groups not only benefited insurers, but also benefited employers. During World War II, wage and price controls prevented employers from using wages to compete for scarce labor. Under the 1942 Stabilization Act, Congress limited the wage increases that could be offered by firms, but permitted the adoption of employee insurance plans. In this way, health benefit packages offered one means of securing workers. In the 1940s, two major rulings also reinforced the foundation of the employer-provided health insurance system. First, in 1945 the War Labor Board ruled that employers could not modify or cancel group insurance plans during the contract period. Then, in 1949, the National Labor Relations Board ruled in a dispute between the Inland Steel Co. and the United Steelworkers Union that the term “wages” included pension and insurance benefits. Therefore, when negotiating for wages, the union was allowed to negotiate benefit packages on behalf of workers as well. This ruling, affirmed later by the U.S. Supreme Court, further reinforced the employment-based system.5

Perhaps the most influential aspect of government intervention that shaped the employer-based system of health insurance was the tax treatment of employer-provided contributions to employee health insurance plans. First, employers did not have to pay payroll tax on their contributions to employee health plans. Further, under certain circumstances, employees did not have to pay income tax on their employer’s contributions to their health insurance plans. The first such exclusion occurred under an administrative ruling handed down in 1943 which stated that payments made by the employer directly to commercial insurance companies for group medical and hospitalization premiums of employees were not taxable as employee income (Yale Law Journal, 1954, pp. 222-247). While this particular ruling was highly restrictive and limited in its applicability, it was codified and extended in 1954. Under the 1954 Internal Revenue Code (IRC), employer contributions to employee health plans were exempt from employee taxable income. As a result of this tax-advantaged form of compensation, the demand for health insurance further increased throughout the 1950s (Thomasson 2003).
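
The size of this tax advantage can be shown with a stylized calculation. The premium and the marginal tax rate in the sketch below are hypothetical, chosen only to show why a dollar of employer-paid premium was worth more to a worker than a dollar of taxable wages.

# Hypothetical illustration of the tax advantage created by excluding
# employer-paid premiums from taxable income. The wage amount, premium,
# and marginal tax rate are invented for exposition.

marginal_tax_rate = 0.20   # assumed marginal income tax rate
premium = 100.0            # assumed annual cost of a group health policy

# Option 1: employer pays $100 in extra cash wages; the worker buys the
# policy out of after-tax income.
cash_after_tax = premium * (1 - marginal_tax_rate)
shortfall = premium - cash_after_tax

# Option 2: employer pays the $100 premium directly; under the exclusion
# described above, none of it counts as taxable income to the employee.
benefit_after_tax = premium

print(f"After-tax value of $100 paid as wages:   ${cash_after_tax:.2f}")
print(f"After-tax value of $100 paid as premium: ${benefit_after_tax:.2f}")
print(f"Tax saving from employer-paid coverage:  ${shortfall:.2f}")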

The 1960s: Medicare and Medicaid

The AMA and the Defeat of Government Insurance before 1960

By the 1960s, the system of private health insurance in the United States was well established. In 1958, nearly 75 percent of Americans had some form of private health insurance coverage. By helping to implement a successful system of voluntary health insurance plans, the medical profession had staved off the government intervention and nationalized insurance that it had feared since the 1910s. In addition to ensuring that private citizens had access to voluntary coverage, the AMA was also a vocal opponent of any nationalized health insurance programs, arguing that such proposals were socialistic and would interfere with physician income and the doctor-patient relationship. The AMA had played a significant role in defeating proposals for nationalized health insurance in 1935 (under the Social Security Act) and later in defeating the proposed Murray-Wagner-Dingell (MWD) bill in 1949. The MWD bill would have provided comprehensive nationalized health insurance to all Americans. To ensure the defeat of the proposal, the AMA assessed each of its member physicians $25 to fund its lobbying efforts (Marmor 2000).

While serious proposals for government-sponsored health insurance were not put forth during the Eisenhower administration (1953-1961), proponents of such legislation worked to ensure that their ideas would have a chance at passing in the future under more responsive administrations. They realized that the only way to enact government-sponsored health insurance would be to do so incrementally — and they began by focusing on the elderly (Marmor 2000).

Offering insurance to persons aged 65 and over provided a means to counter several criticisms that opponents of government-sponsored health insurance had aimed at previous bills. Focusing on the elderly allowed proponents to counter charges that nationalized health insurance would provide health care to individuals who were generally able to pay for it themselves. It was difficult for opponents to argue that the elderly were not among the most medically needy in society, given their fixed incomes and the fact that they were generally in poorer health and in greater need of medical care. Supporters also tried to limit the opposition of the AMA by putting forth proposals that covered only hospital services, which also blunted the criticism that nationalized health insurance would encourage extensive — and unnecessary — utilization of medical services.

Medicare Provisions

The political atmosphere became much more favorable toward nationalized health insurance proposals after John F. Kennedy was elected to office in 1960, and especially after the Democrats won overwhelming majorities in Congress in 1964. Passed in 1965, Medicare was a federal program with uniform standards that consisted of two parts. Part A was the compulsory hospital insurance program in which the aged were automatically enrolled upon reaching age 65. Part B provided supplemental medical insurance, or subsidized insurance for physicians’ services. Ironically, physicians stood to benefit tremendously from Medicare. Fearing that physicians would refuse to treat Medicare patients, legislators agreed to reimburse physicians according to their “usual, customary, and reasonable rate.” In addition, doctors could bill patients directly, leaving patients to seek reimbursement from Medicare. Thus, doctors were still permitted to price discriminate by charging patients more than the program would pay and requiring patients to pay the difference. Funding for Medicare comes from payroll taxes, income taxes, trust fund interest, and enrollee premiums for Part B. Medicare has grown from serving 19.1 million recipients in 1966 to 39.5 million in 1999 (Henderson 2002, p. 425).

Medicaid

In contrast to Medicare, Medicaid was enacted as a means-tested, federal-state program to provide medical resources for the indigent. The federal portion of a state’s Medicaid payments is based on each state’s per capita income relative to national per capita income. Unlike Medicare, which has uniform national benefits and eligibility standards, the federal government only specifies minimum standards for Medicaid; each of the states is responsible for determining eligibility and benefits within these broad guidelines. Thus, benefits and eligibility vary widely across states. While the original legislation provided coverage for recipients of public assistance, legislative changes have expanded the scope of benefits and beneficiaries (Gruber 2000). In 1966, Medicaid provided benefits for 10 million recipients. By 1999, 37.5 million people received care under Medicaid (Henderson 2002, p. 433).

Growth of Medicare and Medicaid Expenditures

Figure 3 shows how Medicare and Medicaid expenditures have grown as a percentage of total national health care expenditures since their inception in 1966. The figure points to some interesting trends. Expenditures in both programs rose dramatically in the late 1960s as the programs began to gear up. Then, Medicare expenditures in particular rose sharply during the 1970s. This growth in Medicare expenditures prompted a major change in Medicare reimbursement policy in 1983. Instead of reimbursing according to the “usual and customary” rates, the government enacted a prospective payment system under which providers were reimbursed according to set fee schedules based on diagnosis. Medicaid expenditures were fairly constant over the 1970s and 1980s, and did not begin to rise until more generous eligibility requirements were implemented in the 1990s. By 2001, Medicare and Medicaid together accounted for 32 percent of all health care expenditures in the U.S.

Figure 3: Medicare and Medicaid as a Share of National Health Expenditures, 1966-2001

Source: Calculations by author based on data from the Centers for Medicare and Medicaid Services (http://cms.gov).

Notes: Percentages are calculated from price-adjusted data for all consumer expenditures, 1996=100.

Endnotes

1 In Canada, fraternal societies were the primary source of sickness benefits and access to a physician in the event of illness. The role of fraternal lodges in insurance declined significantly after 1929. See Emery 1996 and Emery and Emery 1999.

2 These changes may also have increased physician quality, thus leading to an increase in demand for physicians’ services that put additional pressure on prices.

3 Stock companies are owned by stockholders, who are entitled to the earnings of the company. Stock companies are required to hold reserves to guard against insolvency (see Faulkner 1960, pp. 406-29 for a detailed discussion of reserves). Mutual companies are cooperative organizations in which the control of the company and its ownership rest with the insureds. Mutual companies may be required to hold reserves, or to engage in assessment liability (in which insureds must pay additional amounts if premiums fall short of claims). Both stock and mutual companies pay taxes.

4 However, the enabling legislation did not give the Blue Cross plans free rein. The laws required the plans to be non-profit and to allow subscribers free choice of physician, and some states specified additional requirements. New York was the first state to enact such enabling legislation in 1934, and 32 states had adopted special enabling legislation for hospital service plans by 1943. Other states exempted Blue Cross plans by categorizing them strictly as nonprofit organizations (Eilers 1963, pp. 100-07).

5 Scofea 1994, p. 6. See also Inland Steel Co. v. NLRB, 170 F.2d 247 (7th Cir. 1948), and Eilers 1963, p. 19.

References

American Foundation. American Medicine, Volume II. New York: The American Foundation, 1937.

Anderson, Odin W. State Enabling Legislation for Non-Profit Hospital and Medical Plans, 1944. Ann Arbor: University of Michigan Press, 1944.

Centers for Medicare and Medicaid Services. Statistics. Retrieved from http://cms.gov/statistics/ on April 4, 2003.

Davis, Michael M. “The American Approach to Health Insurance.” Milbank Memorial Fund Quarterly 12 (July 1934): 201-17.

Davis, Michael M. and C. Rufus Rorem. The Crisis in Hospital Finance and Other Studies in Hospital Economics. Chicago: University of Chicago Press, 1932.

Eilers, Robert D. Regulation of Blue Cross and Blue Shield Plans. Homewood, IL: Richard D. Irwin, Inc., 1963.

Emery, J.C. Herbert. “Risky Business? Nonactuarial Pricing Practices and the Financial Viability of Fraternal Sickness Insurers.” Explorations in Economic History 33, no. 2 (April 1996): 195-226.

Emery, George and J.C. Herbert Emery. A Young Man’s Benefit: The Independent Order of Odd Fellows and Sickness Insurance in the United States and Canada. Montreal and Kingston: McGill-Queen’s University Press, 1999.

Falk, I.S., C. Rufus Rorem, and Martha D. Ring. The Cost of Medical Care. Chicago: University of Chicago Press, 1933.

Faulkner, Edwin J. Health Insurance. New York: McGraw-Hill, 1960.

Flexner, Abraham. Medical Education in the United States and Canada. New York: Carnegie Foundation for the Advancement of Teaching, 1910.

Gruber, Jonathan B. “Medicaid.” National Bureau of Economic Research Working Paper 7829, August 2000.

Health Insurance Institute. Source Book of Health Insurance Data, 1960. New York: Health Insurance Institute, 1961.

Health Insurance Institute. Source Book of Health Insurance Data, 1965. New York: Health Insurance Institute, 1966.

Hedinger, Fredric R. The Social Role of Blue Cross as a Device for Financing the Costs of Hospital Care: An Evaluation. Iowa City: University of Iowa, 1966.

Henderson, James W. Health Economics and Policy, second edition. Cincinnati: South-Western, 2002.

The Insurance Monitor. Walter S. Nichols, editor. 67, no. 7. (July 1919).

Journal of the American Medical Association, August 12, 1922.

Marmor, Theodore R. The Politics of Medicare, second edition. New York: Aldine de Gruyter, 2000.

McDavitt, T.V. “Voluntary Prepayment Medical Care Plans.” Journal of American Insurance 23, no. 2 (December 1946).

Numbers, Ronald L. Almost Persuaded: American Physicians and Compulsory Health Insurance, 1912-1920. Baltimore: Johns Hopkins University Press, 1978.

Pink, Louis H. “Voluntary Hospital and Medical Associations and the State,” address to the Meeting of the Medical Society of the County of Queens, Forest Hills, NY, February 28, 1939. Journal of American Insurance 16, no. 3 (March 1939).

Reed, Louis S. Blue Cross and Medical Service Plans. Washington, D.C.: U.S. Public Health Service, 1947.

Rosenberg, Charles E. The Care of Strangers. New York: Basic Books, Inc., 1987.

Scofea, Laura A. “The Development and Growth of Employer-Provided Health Insurance.” Monthly Labor Review 117 (March 1994).

Shryock, Richard Harrison. The Development of Modern Medicine. Madison: University of Wisconsin Press, 1979.

State of Illinois. Report of the Health Insurance Commission, 1919.

State of Ohio, Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions. Columbus, OH: 1919.

Thomasson, Melissa A. “From Sickness to Health: The Twentieth-Century Development of U.S. Health Insurance.” Explorations in Economic History 39 (July 2002): 233-53.

Thomasson, Melissa A. “The Importance of Group Coverage: How Tax Policy Shaped U.S. Health Insurance.” American Economic Review, forthcoming 2003.

Yale Law Journal, “Taxation of Employee Accident and Health Plans before and under the 1954 Code.” 64, no. 2 (1954): 222-47.

Citation: Thomasson, Melissa. “Health Insurance in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. April 17, 2003. URL http://eh.net/encyclopedia/health-insurance-in-the-united-states/

Smoot-Hawley Tariff

Anthony O’Brien, Lehigh University

The Smoot-Hawley Tariff of 1930 was the subject of enormous controversy at the time of its passage and remains one of the most notorious pieces of legislation in the history of the United States. In the popular press and in political discussions the usual assumption is that the Smoot-Hawley Tariff was a policy disaster that significantly worsened the Great Depression. During the controversy over passage of the North American Free Trade Agreement (NAFTA) in the 1990s, Vice President Al Gore and billionaire former presidential candidate Ross Perot met in a debate on the Larry King Live program. To help make his point that Perot’s opposition to NAFTA was wrong-headed, Gore gave Perot a framed portrait of Sen. Smoot and Rep. Hawley. Gore assumed the audience would consider Smoot and Hawley to have been exemplars of a foolish protectionism. Although the popular consensus on Smoot-Hawley is clear, the verdict among scholars is more mixed, particularly with respect to the question of whether the tariff significantly worsened the Great Depression.

Background to Passage of the Tariff

The Smoot-Hawley Tariff grew out of the campaign promises of Herbert Hoover during the 1928 presidential election. Hoover, the Republican candidate, had pledged to help farmers by raising tariffs on imports of farm products. Although the 1920s were generally a period of prosperity in the United States, this was not true of agriculture; average farm incomes actually declined between 1920 and 1929. During the campaign Hoover had focused on plans to raise tariffs on farm products, but the tariff plank in the 1928 Republican Party platform had actually referred to the potential of more far-reaching increases:

[W]e realize that there are certain industries which cannot now successfully compete with foreign producers because of lower foreign wages and a lower cost of living abroad, and we pledge the next Republican Congress to an examination and where necessary a revision of these schedules to the end that American labor in the industries may again command the home market, may maintain its standard of living, and may count upon steady employment in its accustomed field.

In a longer perspective, the Republican Party had been in favor of a protective tariff since its founding in the 1850s. The party drew significant support from manufacturing interests in the Midwest and Northeast that believed they benefited from high tariff barriers against foreign imports. Although the free trade arguments dear to most economists were espoused by few American politicians during the 1920s, the Democratic Party was generally critical of high tariffs. In the 1920s the Democratic members of Congress tended to represent southern agricultural interests — which saw high tariffs as curtailing foreign markets for their exports, particularly cotton — or unskilled urban workers — who saw the tariff as driving up the cost of living.

The Republicans did well in the 1928 election, picking up 30 seats in the House — giving them a 267 to 167 majority — and seven seats in the Senate — giving them a 56 to 39 majority. Hoover easily defeated the Democratic presidential candidate, New York Governor Al Smith, capturing 58 percent of the popular vote and 444 of 531 votes in the Electoral College. Hoover took office on March 4, 1929 and immediately called a special session of Congress to convene on April 15 for the purpose of raising duties on agricultural products. Once the session began it became clear, however, that the Republican Congressional leadership had in mind much more sweeping tariff increases.

The House concluded its work relatively quickly and passed a bill on May 28 by a vote of 264 to 147. The bill faced a considerably more difficult time in the Senate. A bloc of Progressive Republicans, representing midwestern and western states, held the balance of power in the Senate. Some of these Senators had supported the third-party candidacy of Wisconsin Senator Robert LaFollette during the 1924 presidential election and they were much less protectionist than the Republican Party as a whole. It proved impossible to put together a majority in the Senate to pass the bill, and the special session ended in November 1929 without passing one.

By the time Congress reconvened the following spring the Great Depression was well underway. Economists date the onset of the Great Depression to the cyclical peak of August 1929, although the stock market crash of October 1929 is the more traditional beginning. By the spring of 1930 it was already clear that the downturn would be severe. The impact of the Depression helped to secure the final few votes necessary to put together a slim majority in the Senate in favor of passage of the bill. Final passage in the Senate took place on June 13, 1930 by a vote of 44 to 42, and final passage took place in the House the following day by a vote of 245 to 177. The vote fell largely along party lines. Republicans in the House voted 230 to 27 in favor of final passage. Ten of the 27 Republicans voting no were Progressives from Wisconsin and Minnesota. Democrats voted 150 to 15 against final passage. Ten of the 15 Democrats voting for final passage were from Louisiana or Florida and represented citrus or sugar interests that received significant new protection under the bill.

President Hoover had expressed reservations about the wide-ranging nature of the bill and had privately expressed fears that the bill might provoke retaliation from America’s trading partners. He received a petition signed by more than 1,000 economists, urging him to veto the bill. Ultimately, he signed the Smoot-Hawley bill into law on June 17, 1930.

Tariff Levels under Smoot-Hawley

Calculating the extent to which Smoot-Hawley raised tariffs is not straightforward. The usual summary measure of tariff protection is the ratio of total tariff duties collected to the value of imports. This measure is misleading when applied to the early 1930s. Most of the tariffs in the Smoot-Hawley bill were specific — such as $1.125 per ton of pig iron — rather than ad valorem — or a percentage of the value of the product. During the early 1930s the prices of many products declined, causing the specific tariff to become an increasing percentage of the value of the product. The chart below shows the ratio of import duties collected to the value of dutiable imports. The increase shown for the early 1930s was partly due to declining prices and, therefore, exaggerates the effects of the Smoot-Hawley rate increases.

Source: U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, Washington, D.C.: USGPO, 1975, Series 212.
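
The mechanics can be illustrated with the pig iron duty mentioned above. In the sketch below, the $1.125-per-ton rate is taken from the text, but the pig iron prices are hypothetical, chosen only to show how deflation raises the ad valorem equivalent of a fixed specific duty.

# How deflation raises the ad valorem equivalent of a specific duty.
# The duty of $1.125 per ton of pig iron is cited in the text; the pig
# iron prices below are hypothetical and serve only to show the mechanism.

specific_duty = 1.125  # dollars per ton (Smoot-Hawley rate cited above)

hypothetical_prices = {1929: 18.00, 1932: 12.00}  # dollars per ton (assumed)

for year, price in hypothetical_prices.items():
    ad_valorem_equivalent = specific_duty / price
    print(f"{year}: price ${price:.2f}/ton -> duty = "
          f"{ad_valorem_equivalent:.1%} of product value")

# With the duty fixed in dollars, a one-third fall in price raises the
# duty's share of product value by half, even though the statutory rate
# never changed. This is why the duties-to-imports ratio overstates the
# rate increases actually legislated in Smoot-Hawley.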

A more accurate measure of the increase in tariff rates attributable to Smoot-Hawley can be found in a study carried out by the U.S. Tariff Commission. This study calculated the ad valorem rates that would have prevailed on actual U.S. imports in 1928 if the Smoot-Hawley rates had been in effect then. These rates were compared with the rates prevailing under the Tariff Act of 1922, known as the Fordney-McCumber Tariff. The results are reproduced in Table 1 for the broad product categories used in tariff schedules and for total dutiable imports.

Table 1
Tariff Rates under Fordney-McCumber vs. Smoot-Hawley

Equivalent ad valorem rates
Product Fordney-McCumber Smoot-Hawley
Chemicals 29.72% 36.09%
Earthenware and Glass 48.71 53.73
Metals 33.95 35.08
Wood 24.78 11.73
Sugar 67.85 77.21
Tobacco 63.09 64.78
Agricultural Products 22.71 35.07
Spirits and Wines 38.83 47.44
Cotton Manufactures 40.27 46.42
Flax, Hemp, and Jute 18.16 19.14
Wool and Manufactures 49.54 59.83
Silk Manufactures 56.56 59.13
Rayon Manufactures 52.33 53.62
Paper and Books 24.74 26.06
Sundries 36.97 28.45
Total 38.48 41.14

Source: U.S. Tariff Commission, The Tariff Review, July 1930, Table II, p. 196.

By this measure, Smoot-Hawley raised average tariff rates by about 2 ½ percentage points from the already high rates prevailing under the Fordney-McCumber Tariff of 1922.

The Basic Macroeconomics of the Tariff

Economists are almost uniformly critical of tariffs. One of the bedrock principles of economics is that voluntary trade makes everyone involved better off. For the U.S. government to interfere with trade between Canadian lumber producers and U.S. lumber importers — as it did under Smoot-Hawley by raising the tariff on lumber imports — makes both parties to the trade worse off. In a larger sense, it also hurts the efficiency of the U.S. economy by making it rely on higher priced U.S. lumber rather than less expensive Canadian lumber.

But what is the effect of a tariff on the overall level of employment and production in an economy? The usual answer is that a tariff will leave the overall level of employment and production in an economy largely unaffected. Although the popular view is very different, most economists do not believe that tariffs either create jobs or destroy jobs in aggregate. Economists believe that the overall level of jobs and production in the economy is determined by such things as the capital stock, the population, the state of technology, and so on. These factors are not generally affected by tariffs. So, for instance, a tariff on imports of lumber might drive up housing prices and cause a reduction in the number of houses built. But economists believe that the unemployment in the housing industry will not be long-lived. Economists are somewhat divided on why this is true. Some believe that the economy automatically adjusts rapidly to reallocate labor and machinery that are displaced from one use — such as making houses — into other uses. Other economists believe that this adjustment does not take place automatically, but can be brought about through active monetary or fiscal policy. In either view, the economy is seen as ordinarily being at its so-called full-employment or potential level and deviating from that level only for brief periods of time. Tariffs have the ability to change the mix of production and the mix of jobs available in an economy, but not to change the overall level of production or the overall level of jobs. The macroeconomic impact of tariffs is therefore very limited.

In the case of the Smoot-Hawley Tariff, however, the U.S. economy was in depression in 1930. No active monetary or fiscal policies were carried out and the economy was not making much progress back to full employment. In fact, the cyclical trough was not reached until March 1933 and the economy did not return to full employment until 1941. Under these circumstances is it possible for Smoot-Hawley to have had a significant impact on the level of employment and production and would that impact have been positive or negative?

A simple view of the determination of equilibrium Gross Domestic Product (Y) holds that it is equal to the sum of aggregate expenditures. Aggregate expenditures are divided into four categories: spending by households on consumption goods (C), spending by households and firms on investment goods — such as houses, and machinery and equipment (I), spending by the government on goods and services (G), and net exports, which are the difference between spending on exports by foreign households and firms (EX) and spending on imports by domestic households and firms (IM). So, in the basic algebra of the principles of economics course, at equilibrium, Y = C + I + G + (EX – IM).

The usual story of the Great Depression is that some combination of falling consumption spending and falling investment spending had resulted in the equilibrium level of GDP being far below its full employment level. By raising tariffs on imports, Smoot-Hawley would have reduced the level of imports, but would not have had any direct effect on exports. This simple analysis seems to lead to a surprising conclusion: by reducing imports, Smoot-Hawley would have raised the level of aggregate expenditures in the economy (by increasing net exports or (EX – IM)) and, therefore, increased the level of GDP relative to what it would otherwise have been.

A potential flaw in this argument is that it assumes that Smoot-Hawley did not have a negative impact on U.S. exports. In fact, it may have had a negative impact on exports if foreign governments were led to retaliate against the passage of Smoot-Hawley by raising tariffs on imports of U.S. goods. If net exports fell as a result of Smoot-Hawley, then the tariff would have had a negative macroeconomic impact; it would have made the Depression worse. In 1934 Joseph Jones wrote a very influential book in which he argued that widespread retaliation against Smoot-Hawley had, in fact, taken place. Jones’s book helped to establish the view among the public and among scholars that the passage of Smoot-Hawley had been a policy blunder that had worsened the Great Depression.

Did Retaliation Take Place?

This is a simplified analysis and there are other ways in which Smoot-Hawley could have had a macroeconomic impact, such as by increasing the price level in the U.S. relative to foreign price levels. But in recent years there has been significant scholarly interest in the question of whether Smoot-Hawley did provoke significant retaliation and, therefore, made the Depression worse. Clearly it is possible to overstate the extent of retaliation and Jones almost certainly did. For instance, the important decision by Britain to abandon a century-long commitment to free trade and raise tariffs in 1931 was not affected to any significant extent by Smoot-Hawley.

On the other hand, the case for retaliation by Canada is fairly clear. Then, as now, Canada was easily the largest trading partner of the United States. In 1929, 18 percent of U.S. merchandise exports went to Canada and 11 percent of U.S. merchandise imports came from Canada. At the time of the passage of Smoot-Hawley the Canadian Prime Minister was William Lyon Mackenzie King of the Liberal Party. King had been in office for most of the period since 1921 and had several times reduced Canadian tariffs. He held the position that tariffs should be used to raise revenue, but should not be used for protection. In early 1929 he was contemplating pushing for further tariff reductions, but this option was foreclosed by Hoover’s call for a special session of Congress to consider tariff increases.

As Smoot-Hawley neared passage King came under intense pressure from the Canadian Conservative Party and its leader, Richard Bedford Bennett, to retaliate. In May 1930 Canada imposed so-called countervailing duties on 16 products imported from the United States. The duties on these products — which represented about 30 percent of the value of all U.S. merchandise exports to Canada — were raised to the levels charged by the United States. In a speech, King made clear the retaliatory nature of these increases:

[T]he countervailing duties … [are] designed to give a practical illustration to the United States of the desire of Canada to trade at all times on fair and equal terms…. For the present we raise the duties on these selected commodities to the level applied against Canadian exports of the same commodities by other countries, but at the same time we tell our neighbour … we are ready in the future … to consider trade on a reciprocal basis….

In the election campaign the following July, Smoot-Hawley was a key issue. Bennett, the Conservative candidate, was strongly in favor of retaliation. In one campaign speech he declared:

How many thousands of American workmen are living on Canadian money today? They’ve got the jobs and we’ve got the soup kitchens…. I will not beg of any country to buy our goods. I will make [tariffs] fight for you. I will use them to blast a way into markets that have been closed.

Bennett handily won the election and pushed through the Canadian Parliament further tariff increases.

What Was the Impact of the Tariff on the Great Depression?

If there was retaliation for Smoot-Hawley, was this enough to have made the tariff a significant contributor to the severity of the Great Depression? Most economists are skeptical because foreign trade made up a small part of the U.S. economy in 1929 and the magnitude of the decline in GDP between 1929 and 1933 was so large. Table 2 gives values for nominal GDP, for real GDP (in 1929 dollars), for nominal and real net exports, and for nominal and real exports. In real terms, net exports did decline by about $.7 billion between 1929 and 1933, but this amounts to less than one percent of 1929 real GDP and is dwarfed by the total decline in real GDP between 1929 and 1933.

Table 2
GDP and Exports, 1929-1933 (billions of dollars)

Year Nominal GDP Real GDP Nominal Net Exports Real Net Exports Nominal Exports Real Exports
1929 $103.1 $103.1 $0.4 $0.3 $5.9 $5.9
1930 $90.4 $93.3 $0.3 $0.0 $4.4 $4.9
1931 $75.8 $86.1 $0.0 -$0.4 $2.9 $4.1
1932 $58.0 $74.7 $0.0 -$0.3 $2.0 $3.3
1933 $55.6 $73.2 $0.1 -$0.4 $2.0 $3.3

Source: U.S. Department of Commerce, National Income and Product Accounts of the United States, Vol. I, 1929-1958, Washington, D.C.: USGPO, 1993.

If we focus on the decline in exports, we can construct an upper bound for the negative impact of Smoot-Hawley. Between 1929 and 1931, real exports declined by an amount equal to about 1.7% of 1929 real GDP. Declines in aggregate expenditures are usually thought to have a multiplied effect on equilibrium GDP. The best estimates are that the multiplier is roughly two. In that case, real GDP would have declined by about 3.4% between 1929 and 1931 as a result of the decline in real exports. Real GDP actually declined by about 16.5% between 1929 and 1931, so the decline in real exports can account for about 21% of the total decline in real GDP. The decline in real exports, then, may well have played an important, but not crucial, role in the decline in GDP during the first two years of the Depression. Bear in mind, though, that not all — perhaps not even most — of the decline in exports can be attributed to retaliation for Smoot-Hawley. Even if Smoot-Hawley had not been passed, U.S. exports would have fallen as incomes declined in Canada, the United Kingdom, and in other U.S. trading partners and as tariff rates in some of these countries increased for reasons unconnected to Smoot-Hawley.
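
The upper-bound calculation in the preceding paragraph can be reproduced directly from the real (1929-dollar) figures in Table 2, together with the assumed multiplier of roughly two; small differences from the figures in the text reflect rounding.

# Back-of-the-envelope reproduction of the upper-bound calculation above,
# using the real (1929-dollar) figures from Table 2 and the assumed
# expenditure multiplier of roughly two.

real_gdp_1929, real_gdp_1931 = 103.1, 86.1        # billions of 1929 dollars (Table 2)
real_exports_1929, real_exports_1931 = 5.9, 4.1   # billions of 1929 dollars (Table 2)
multiplier = 2.0                                  # assumed, as in the text

export_decline_share = (real_exports_1929 - real_exports_1931) / real_gdp_1929
implied_gdp_decline = multiplier * export_decline_share
actual_gdp_decline = (real_gdp_1929 - real_gdp_1931) / real_gdp_1929
share_explained = implied_gdp_decline / actual_gdp_decline

print(f"Export decline, 1929-31: {export_decline_share:.1%} of 1929 real GDP")      # about 1.7%
print(f"Implied GDP decline with a multiplier of two: {implied_gdp_decline:.1%}")   # about 3.5%
print(f"Actual real GDP decline, 1929-31: {actual_gdp_decline:.1%}")                # about 16.5%
print(f"Share of the actual decline attributable to exports: {share_explained:.0%}")  # about 21%

# Even this upper bound treats the entire export decline as retaliation
# for Smoot-Hawley, which overstates the tariff's contribution.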

Hawley-Smoot or Smoot-Hawley: A Note on Usage

Congressional legislation is often referred to by the names of the member of the House of Representatives and the member of the Senate who introduced the bill. Tariff legislation always originates in the House of Representatives, and according to convention the name of its House sponsor, in this case Representative Willis Hawley of Oregon, would precede the name of its Senate sponsor, Senator Reed Smoot of Utah — hence, Hawley-Smoot. In this instance, though, Senator Smoot was far better known than Representative Hawley, and so the legislation is usually referred to as the Smoot-Hawley Tariff. The more formal name of the legislation was the U.S. Tariff Act of 1930.

Further Reading

The Republican Party platform for 1928 is reprinted as: “Republican Platform [of 1928]” in Arthur M. Schlesinger, Jr., Fred L. Israel, and William P. Hansen, editors, History of American Presidential Elections, 1789-1968, New York: Chelsea House, 1971, Vol. 3. Herbert Hoover’s views on the tariff can be found in Herbert Hoover, The Future of Our Foreign Trade, Washington, D.C.: GPO, 1926 and Herbert Hoover, The Memoirs of Herbert Hoover: The Cabinet and the Presidency, 1920-1933, New York: Macmillan, 1952, Chapter 41. Trade statistics for this period can be found in U.S. Department of Commerce, Economic Analysis of Foreign Trade of the United States in Relation to the Tariff. Washington, D.C.: GPO, 1933 and in the annual supplements to the Survey of Current Business.

A classic account of the political process that resulted in the Smoot-Hawley Tariff is given in E. E. Schattschneider, Politics, Pressures and the Tariff, New York: Prentice-Hall, 1935. The best case for the view that there was extensive foreign retaliation against Smoot-Hawley is given in Joseph Jones, Tariff Retaliation: Repercussions of the Hawley-Smoot Bill, Philadelphia: University of Pennsylvania Press, 1934. The Jones book should be used with care; his argument is generally considered to be overstated. The view that party politics was of supreme importance in passage of the tariff is well argued in Robert Pastor, Congress and the Politics of United States Foreign Economic Policy, 1929-1976, Berkeley: University of California Press, 1980.

A discussion of the potential macroeconomic impact of Smoot-Hawley appears in Rudiger Dornbusch and Stanley Fischer, “The Open Economy: Implications for Monetary and Fiscal Policy.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon, NBER Studies in Business Cycles, Volume 25, Chicago: University of Chicago Press, 1986, pp. 466-70. See, also, the article by Barry Eichengreen listed below. An argument that Smoot-Hawley is unlikely to have had a significant macroeconomic effect is given in Peter Temin, Lessons from the Great Depression, Cambridge, MA: MIT Press, 1989, p. 46. For an argument emphasizing the importance of Smoot-Hawley in explaining the Great Depression, see Alan Meltzer, “Monetary and Other Explanations of the Start of the Great Depression,” Journal of Monetary Economics, 2 (1976): 455-71.

Recent journal articles that deal with the issues discussed in this entry are:

Callahan, Colleen, Judith A. McDonald and Anthony Patrick O’Brien. “Who Voted for Smoot-Hawley?” Journal of Economic History 54, no. 3 (1994): 683-90.

Crucini, Mario J. and James Kahn. “Tariffs and Aggregate Economic Activity: Lessons from the Great Depression.” Journal of Monetary Economics 38, no. 3 (1996): 427-67.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Irwin, Douglas. “The Smoot-Hawley Tariff: A Quantitative Assessment.” Review of Economics and Statistics 80, no. 2 (1998): 326-334.

Irwin, Douglas and Randall S. Kroszner. “Log-Rolling and Economic Interests in the Passage of the Smoot-Hawley Tariff.” Carnegie-Rochester Series on Public Policy 45 (1996): 173-200.

McDonald, Judith, Anthony Patrick O’Brien, and Colleen Callahan. “Trade Wars: Canada’s Reaction to the Smoot-Hawley Tariff.” Journal of Economic History 57, no. 4 (1997): 802-26.

Citation: O’Brien, Anthony. “Smoot-Hawley Tariff”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/smoot-hawley-tariff/

Economic History of Hawai’i

Sumner La Croix, University of Hawai’i and East-West Center

The Hawaiian Islands are a chain of 132 islands, shoals, and reefs extending over 1,523 miles in the Northeast Pacific Ocean. Eight islands — Hawai’i, Maui, O’ahu, Kaua’i, Moloka’i, Lana’i, Ni’ihau, and Kaho’olawe — possess 99 percent of the land area (6,435 square miles) and are noted for their volcanic landforms, unique flora and fauna, and diverse climates.

From Polynesian Settlement to Western Contact

The Islands were uninhabited until sometime around 400 AD when Polynesian voyagers sailing double-hulled canoes arrived from the Marquesas Islands (Kirch, 1985, p. 68). Since the settlers had no written language and virtually no contact with the Western world until 1778, our knowledge of Hawai’i’s pre-history comes primarily from archaeological investigations and oral legends. A relatively egalitarian society and subsistence economy were coupled with high population growth rates until about 1100 when continued population growth led to a major expansion of the areas of settlement and cultivation. Perhaps under pressures of increasing resource scarcity, a new, more hierarchical social structure emerged, characterized by chiefs (ali’i) and subservient commoners (maka’ainana). In the two centuries prior to Western contact, there is considerable evidence that ruling chiefs (ali’i nui) competed to extend their lands by conquest and that this led to cycles of expansion and retrenchment.

Captain James Cook’s ships reached Hawai’i in 1778, thereby ending a long period of isolation for the Islands. Captain James King observed in 1779 that Hawaiians were generally “above the middle size” of Europeans, a rough indicator that Hawaiians generally had a diet superior to eighteenth-century Europeans. At contact, Hawaiian social and political institutions were similar to those found in other Polynesian societies. Hawaiians were sharply divided into three main social classes: ali’i (chiefs), maka’ainana (commoners), and kahuna (priests). Oral legends tell us that the Islands were usually divided into six to eight small kingdoms consisting of an island or part of an island, each governed by an ali’i nui (ruling chief). The ali’i nui had extensive rights to all lands and material goods and the ability to confiscate or redistribute material wealth at any time. Redistribution usually occurred only when a new ruling chief took office or when lands were conquered or lost. The ali’i nui gave temporary land grants to ali’i who, in turn, gave temporary land grants to konohiki (managers), who then “contracted” with maka’ainana, the great majority of the populace, to work the lands.

Hawaiian society and the economy had their roots in extended families (‘ohana) working cooperatively on an ahupua’a, a land unit running from the mountains to the sea. Numerous tropical root, tuber, and tree crops were cultivated. Taro, a wetland crop, was cultivated primarily in windward areas, while sweet potatoes and yams, both dryland crops, were cultivated in drier leeward areas. The maka’ainana apparently lived well above subsistence levels, with extensive time available for cultural activities, sports, and games. There were unquestionably periods of hardship, but these times tended to be associated with drought or other causes of poor harvest.

Unification of Hawai’i and Population Decline

The long-prevailing political equilibrium began to disintegrate shortly after the introduction of guns and the spread of new diseases to the Islands. In 1784, the most powerful ali’i nui, Kamehameha, began a war of conquest, and with his superior use of modern weapons and western advisors, he subdued all other chiefdoms, with the exception of Kaua’i, by 1795. Each chief in his ruling coalition received the right to administer large areas of land, consisting of smaller strips on various islands. Sumner La Croix and James Roumasset (1984) have argued that the strip system conveyed durability to the newly unified kingdom (by making it more costly for an ali’i to accumulate a power base on one island) and facilitated monitoring of ali’i production by the new king. In 1810, Kamehameha reached a negotiated settlement with Kaumuali’i, the ruling chief of Kaua’i, which brought the island under his control, thereby bringing the entire island chain under a single monarchy.

Exposure to Western diseases produced a massive decline in the native population of Hawai’i from 1778 through 1900 (Table 1). Estimates of Hawai’i’s population at the time of contact vary wildly, from approximately 110,000 to one million people (Bushnell, 1993; Dye, 1994). The first missionary census in 1831-1832 counted 130,313 people. A substantial portion of the decline can be attributed to a series of epidemics beginning after contact, including measles, influenza, diarrhea, and whooping cough. The introduction of venereal diseases was a factor behind declining crude birth rates. The first accurate census conducted in the Islands revealed a population of 80,641 in 1849. The native Hawaiian population reached its lowest point in 1900 when the U.S. census revealed only 39,656 full or part Hawaiians.

Table 1: Population of Hawai’i

Year Total Population Native Hawaiian Population
1778 110,000-1,000,000 110,000-1,000,000
1831-32 130,313 n.a.
1853 73,137 71,019
1872 56,897 51,531
1890 89,990 40,622
1900 154,001 39,656
1920 255,881 41,750
1940 422,770 64,310
1960 632,772 102,403
1980 964,691 115,500
2000 1,211,537 239,655

Sources: Total population from http://www.hawaii.gov/dbedt/db99/index.html, Table 1.01, Dye (1994), and Bushnell (1993). Native Hawaiian population for 1853-1960 from Schmitt (1977), p. 25. Data from the 2000 census includes people declaring “Native Hawaiian” as their only race or one of two races. See http://factfinder.census.gov/servlet/DTTable?_ts=18242084330 for the 2000 census population.

The Rise and Fall of Sandalwood and Whaling

With the unification of the Islands came the opening of foreign trade. Trade in sandalwood, a wood in demand in China for ornamental uses and burning as incense, began in 1805. The trade was interrupted by the War of 1812 and then flourished from 1816 to the late 1820s before fading away in the 1830s and 1840s (Kuykendall, 1957, I, pp. 86-87). La Croix and Roumasset (1984) have argued that the centralized organization of the sandalwood trade under King Kamehameha provided the king with incentives to harvest sandalwood efficiently. The adoption of a decentralized production system by his successor (Liholiho) led to the sandalwood being treated by ali’i as a common property resource. The reallocation of resources from agricultural production to sandalwood production not only led to rapid exhaustion of the sandalwood resource but also to famine.

As the sandalwood industry declined, Hawai’i became the base for the north-central Pacific whaling trade. The impetus for the new trade was the 1818 discovery of the “Offshore Ground” west of Peru and the 1820 discovery of rich sperm whale grounds off the coast of Japan. The first whaling ship visited the Islands in 1820, and by the late 1820s over 150 whaling ships were stopping in Hawai’i annually. While ship visits declined somewhat during the 1830s, by 1843 over 350 whaling ships annually visited the two major ports of Honolulu and Lahaina. Through the 1850s over 500 whaling ships visited Hawai’i annually. The demise of the Pacific whaling fleet during the U.S. Civil War and the rapid rise of the petroleum industry led to steep declines in the number of ships visiting Hawai’i, and after 1870 only a trickle of ships continued to visit.

Missionaries and Land Tenure

In 1819, King Kamehameha’s successor, Liholiho, abandoned the system of religious practices known as the kapu system and ordered temples (heiau) and images of the gods desecrated and burnt. In April 1820, missionaries from New England arrived and began filling the religious void with conversions to protestant Christianity. Over the next two decades as church attendance became widespread, the missionaries suppressed many traditional Hawaiian cultural practices, operated over 1,000 common schools, and instructed the ali’i in western political economy. The king promulgated a constitution with provisions for a Hawai’i legislature in 1840. It was followed, later in the decade, by laws establishing a cabinet, civil service, and judiciary. Under the 1852 constitution, male citizens received the right to vote in elections for a legislative lower house. Missionaries and other foreigners regularly served in cabinets through the end of the monarchy.

In 1844, the government began a 12-year program, known as the Great Mahele (Division), to dismantle the traditional system of land tenure. King Kauikeaouli gave up his interest in all island lands, retaining ownership only in selected estates. Ali’i had the right to take out fee simple title to lands held at the behest of the king. Maka’ainana had the right to claim fee simple title to small farms (kuleana). At the end of the claiming period, maka’ainana had received only about 40,000 acres of land, while the government (~1.5 million acres), the king (~900,000 acres), and the ali’i (~1.5 million acres) all received substantial shares. Foreigners were initially not allowed to own land in fee simple, but an 1850 law overturned this restriction. By the end of the nineteenth century, commoners and chiefs had sold, lost, or given up their lands, with foreigners and large estates owning most non-government lands.

Lilikala Kame’eleihiwa (1992) found the origins of the Mahele in the traditional duty of a king to undertake a redistribution of land and the difficulty of such an undertaking during the initial years of missionary influence. By contrast, La Croix and Roumasset (1990) found the origins of the Mahele in the rising value of Hawaii land in sugar cultivation, with fee simple title facilitating investment in the land, irrigation facilities, and processing factories.

Sugar, Immigration, and Population Increase

The first commercially-viable sugar plantation, Ladd and Co., was started on Kaua’i in 1835, and the sugar industry achieved moderate growth through the 1850s. Hawai’i’s sugar exports to California soared during the U.S. Civil War, but the end of hostilities in 1865 also meant the end of the sugar boom. The U.S. tariff on sugar posed a major obstacle to expanding sugar production in Hawai’i during peacetime, as the high tariff, ranging from 20 to 42 percent between 1850 and 1870, limited the extent of profitable sugar cultivation in the islands. Sugar interests helped elect King Kalakaua to the Hawaiian throne over the British-leaning Queen Emma in February 1874, and Kalakaua immediately sought a trade agreement with the United States. The 1876 reciprocity treaty between Hawai’i and the United States allowed duty-free sales of Hawai’i sugar and other selected agricultural products in the United States as well as duty-free sales of most U.S. manufactured goods in Hawai’i. Sugar exports from Hawai’i to the United States soared after the treaty’s promulgation, rising from 21 million pounds in 1876 to 114 million pounds in 1883 to 224.5 million pounds in 1890 (Table 2).

Table 2: Hawai’i Sugar Production (1000 short tons)

Year Exports Year Production Year Production
1850 0.4 1900 289.5 1950 961
1860 0.7 1910 529.9 1960 935.7
1870 9.4 1920 560.4 1970 1162.1
1880 31.8 1930 939.3 1990 819.6
1890 129.9 1940 976.7 1999 367.5

Sources: Data for 1850-1970 are from Schmitt (1977), pp. 418-420. Data for 1990 and 1999 are from http://www.hawaii.gov/dbedt/db99/index.html, Table 22.09. Data for 1850-1880 are exports. Data for 1910-1990 are converted to 96° raw value.

The reciprocity treaty set the tone for Hawai’i’s economy and society over the next 80 years by establishing sugar as Hawai’i’s leading industry and by altering the demographic composition of the Islands via the industry’s labor demands. Rapid expansion of the sugar industry after reciprocity sharply increased its demand for labor: plantation employment rose from 3,921 in 1872 to 10,243 in 1882 to 20,536 in 1892. The increase in labor demand occurred while the native Hawaiian population continued its precipitous decline, and the Hawai’i government responded to labor shortages by allowing sugar planters to bring in overseas contract laborers bound to serve at fixed wages for three- to five-year periods. The enormous increase in the plantation workforce consisted of first Chinese, then Japanese, then Portuguese contract laborers.

The extensive investment in sugar industry lands and irrigation systems, coupled with the rapid influx of overseas contract laborers, changed the bargaining positions of Hawai’i and the United States when the reciprocity treaty was due for renegotiation in 1883. La Croix and Christopher Grandy (1997) argued that the profitability of the planters’ new investment depended on access to the U.S. market, and this improved the bargaining position of the United States. As a condition for renewal of the treaty, the United States demanded access to Pearl Bay [now Pearl Harbor]. King Kalakaua opposed this demand, and in July 1887, opponents of the government forced the king to accept a new constitution and cabinet. With the election of a new pro-American government in September 1887, the king signed an extension of the reciprocity treaty in October 1887 that granted access rights to Pearl Bay to the United States for the life of the treaty.

Annexation and the Sugar Economy

In 1890, the U.S. Congress enacted the McKinley Tariff, which allowed raw sugar to enter the United States free of duty and established a two-cent per pound bounty for domestic producers. The overall effect of the McKinley Tariff was to completely erase the advantages that the reciprocity treaty had provided to Hawaiian sugar producers over other foreign sugar producers selling in the U.S. market. The value of Hawaiian merchandise exports plunged from $13 million in 1890 to $10 million in 1891 to a low point of $8 million in 1892.

La Croix and Grandy (1997) argued that the McKinley Tariff threatened the wealth of the planters and induced important changes in Hawai’i’s domestic politics. King Kalakaua died in January 1891, and his sister succeeded him. After Queen Lili’uokalani proposed to declare a new constitution in January 1893, a group of U.S. residents, with the incautious assistance of the U.S. Minister and troops from a U.S. warship, overthrew the monarchy. The new government, dominated by the white minority, offered Hawai’i to the United States for annexation in 1893. Annexation was first opposed by U.S. President Cleveland and then, during U.S. President McKinley’s term, failed to obtain Congressional approval. The advent of the Spanish-American War and the ensuing hostilities in the Philippines raised Hawai’i’s strategic value to the United States, and Hawai’i was annexed by a joint resolution of Congress in July 1898. Hawai’i became a U.S. territory with the passage of the Organic Act on June 14, 1900.

Economic Integration with the United States

Annexation by the United States in 1900 eliminated bound labor contracts, freeing the existing labor force from them. After annexation, the sugar planters and the Hawai’i government recruited workers from Japan, Korea, the Philippines, Spain, Portugal, Puerto Rico, England, Germany, and Russia. The ensuing flood of immigrants swelled the population of the Hawaiian Islands from 109,020 people in 1896 to 232,856 people in 1915. The growth in the plantation labor force was one factor behind the expansion of sugar production from 289,500 short tons in 1900 to 939,300 short tons in 1930. Pineapple production also expanded, from just 2,000 cases of canned fruit in 1903 to 12,808,000 cases in 1931.

La Croix and Price Fishback (2000) established that European and American workers on sugar plantations were paid job-specific wage premiums relative to Asian workers and that the premium paid for unskilled American workers fell by one third between 1901 and 1915 and for European workers by 50 percent or more over the same period. While similar wage gaps disappeared during this period on the U.S. West Coast, Hawai’i plantations were able to maintain a portion of the wage gaps because they constantly found new low-wage immigrants to work in the Hawai’i market. Immigrant workers from Asia failed, however, to climb many rungs up the job ladder on Hawai’i sugar plantations, and this was a major factor behind labor unrest in the sugar industry. Edward Beechert (1985) concluded that large-scale strikes on sugar plantations during 1909 and 1920 improved the welfare of sugar plantation workers but did not lead to recognition of labor unions. Between 1900 and 1941, many sugar workers responded to limited advancement and wage prospects on the sugar plantation by leaving the plantations for jobs in Hawai’i’s growing urban areas.

The rise of the sugar industry and the massive inflow of immigrant workers into Hawaii was accompanied by a decline in the Native Hawaiian population and its overall welfare (La Croix and Rose, 1999). Native Hawaiians and their political representatives argued that government lands should be made available for homesteading to enable Hawaiians to resettle in rural areas and to return to farming occupations. The U.S. Congress enacted legislation in 1921 to reserve specified rural and urban lands for a new Hawaiian Homes Program. La Croix and Louis Rose have argued that the Hawaiian Homes Program has functioned poorly, providing benefits for only a small portion of the Hawaiian population over the course of the twentieth century.

Five firms (Castle & Cooke, Alexander & Baldwin, C. Brewer & Co., Theo. Davies & Co., and American Factors) came to dominate the sugar industry. Originally established to provide financial, labor recruiting, transportation, and marketing services to plantations, they gradually acquired the plantations and also gained control over other vital industries such as banking, insurance, retailing, and shipping. By 1933, their plantations produced 96 percent of the sugar crop. The “Big Five’s” dominance would continue until the rise of the tourism industry and statehood induced U.S. and foreign firms to enter Hawai’i’s markets.

The Great Depression hit Hawai’i hard, as employment in the sugar and pineapple industries declined during the early 1930s. In December 1936, about one-quarter of Hawai’i’s labor force was unemployed. Full recovery would not occur until the military began a buildup in the mid-1930s in reaction to Japan’s occupation of Manchuria. With the Japanese invasion of China in 1937, the number of U.S. military personnel in Hawai’i increased to 48,000 by September 1940.

World War II and its Aftermath

The Japanese attack on the American Pacific Fleet at Pearl Harbor on December 7, 1941 led to a declaration of martial law, a state that continued until October 24, 1944. The war was accompanied by a massive increase in American armed service personnel in Hawai’i, with numbers increasing from 28,000 in 1940 to 378,000 in 1944. The total population increased from 429,000 in 1940 to 858,000 in 1944, thereby substantially increasing the demand for retail, restaurant, and other consumer services. An enormous construction program to house the new personnel was undertaken in 1941 and 1942. The wartime interruption of commercial shipping reduced the tonnage of civilian cargo arriving in Hawai’i by more than 50 percent. Employees working in designated high priority organizations, including sugar plantations, had their jobs and wages frozen in place by General Order 18, which also suspended union activity.

In March 1943, the National Labor Relations Board was allowed to resume operations, and the International Longshoremen’s and Warehousemen’s Union (ILWU) organized 34 of Hawai’i’s 35 sugar plantations, the pineapple plantations, and the longshoremen by November 1945. The passage of the Hawai’i Employment Relations Act in 1945 facilitated union organizing by providing agricultural workers with the same union organizing rights as industrial workers.

After the War, Hawai’i’s economy stagnated, as demobilized armed services personnel left Hawai’i for the U.S. mainland. With the decline in population, real per capita personal income declined at an annual rate of 5.7 percent between 1945 and 1949 (Schmitt, 1976, pp. 148, 167). During this period, Hawai’i’s newly formed unions embarked on a series of disruptive strikes covering West Coast and Hawai’i longshoremen (1946-1949); the sugar industry (1946); and the pineapple industry (1947, 1951). The economy began a nine-year period of moderate expansion in 1949, with the annual growth rate of real personal income averaging 2.3 percent. The expansion of propeller-driven commercial air service sent visitor numbers soaring, from 15,000 in 1946 to 171,367 in 1958, and induced construction of new hotels and other tourism facilities and infrastructure. The onset of the Korean War increased the number of armed service personnel stationed in Hawai’i from 21,000 in 1950 to 50,000 in 1958. Pineapple production and canning also displayed substantial increases over the decade, increasing from 13,697,000 cases in 1949 to 18,613,000 cases in 1956.
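A figure like the 5.7 percent annual decline cited above is conventionally computed as a compound annual growth rate between two endpoint observations. The short Python sketch below shows that calculation; the income values in it are placeholder index numbers chosen only for illustration, not Schmitt’s data.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two endpoint observations."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Placeholder index values (not Schmitt's figures) spanning the four years 1945-1949.
income_1945, income_1949 = 100.0, 79.0
print(f"Annual growth rate: {cagr(income_1945, income_1949, years=4):.1%}")
# With these placeholder values the result is roughly -5.7% per year.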

Integration and Growth after Statehood

In 1959, Hawai’i became the fiftieth state. The transition from territorial to statehood status was one factor behind the 1958-1973 boom, in which real per capita personal income increased at an annual rate of 4 percent. The most important factor behind the long expansion was the introduction of commercial jet service in 1959, as the jet plane dramatically reduced the money and time costs of traveling to Hawai’i. Also fueled by rapidly rising real incomes in the United States and Japan, the tourism industry would continue its rapid growth through 1990. Visitor arrivals (see Table 3) increased from 171,367 in 1958 to 6,723,531 in 1990. Growth in visitor arrivals was once again accompanied by growth in the construction industry, particularly from 1965 to 1975. The military build-up during the Vietnam War also contributed to the boom by increasing defense expenditures in Hawai’i by 3.9 percent annually from 1958 to 1973 (Schmitt, 1977, pp. 148, 668).

Table 3: Visitor Arrivals to Hawai’i

Year Visitor Arrivals Year Visitor Arrivals
1930 18,651 1970 1,745,904
1940 25,373 1980 3,928,789
1950 46,593 1990 6,723,531
1960 296,249 2000 6,975,866

Source: Hawai’i Tourism Authority, http://www.hawaii.gov/dbedt/monthly/historical-r.xls at Table 5 and http://www.state.hi.us/dbedt/monthly/index2k.html.

From 1973 to 1990, growth in real per capita personal income slowed to 1.1 percent annually. The defense and agriculture sectors stagnated, with most growth generated by the relentless increase in visitor arrivals. Japan’s persistently high rates of economic growth during the 1970s and 1980s spilled over to Hawai’i in the form of huge increases in the numbers of Japanese tourists and in the value of Japanese foreign investment in Hawai’i. At the end of the 1980s, the Hawai’i unemployment rate was just 2-3 percent, employment had been steadily growing since 1983, and prospects looked good for continued expansion of both tourism and the overall economy.

The Malaise of the 1990s

From 1991 to 1998, Hawai’i’s economy was hit by several negative shocks. The 1990-1991 recession in the United States, the closure of California military bases and defense plants, and uncertainty over the safety of air travel during the 1991 Gulf War combined to reduce visitor arrivals from the United States in the early and mid-1990s. Volatile and slow growth in Japan throughout the 1990s led to declines in Japanese visitor arrivals in the late 1990s. The ongoing decline in sugar and pineapple production gathered steam in the 1990s, with only a handful of plantations still in business by 2001. The cumulative impact of these adverse shocks was severe, as real per capita personal income did not change between 1991 and 1998.

The recovery that began in the late 1990s continued through summer 2001 despite a slowing U.S. economy. It came to an abrupt halt with the terrorist attacks of September 11, 2001, as domestic and foreign tourism declined sharply.

References

Beechert, Edward D. Working in Hawaii: A Labor History. Honolulu: University of Hawaii Press, 1985.

Bushnell, Andrew F. “The ‘Horror’ Reconsidered: An Evaluation of the Historical Evidence for Population Decline in Hawai’i, 1778-1803.” Pacific Studies 16 (1993): 115-161.

Daws, Gavan. Shoal of Time: A History of the Hawaiian Islands. Honolulu: University of Hawaii Press, 1968.

Dye, Tom. “Population Trends in Hawai’i before 1778.” The Hawaiian Journal of History 28 (1994): 1-20.

Hitch, Thomas Kemper. Islands in Transition: The Past, Present, and Future of Hawaii’s Economy. Honolulu: First Hawaiian Bank, 1992.

Kame’eleihiwa, Lilikala. Native Land and Foreign Desires: Pehea La E Pono Ai? Honolulu: Bishop Museum Press, 1992.

Kirch, Patrick V. Feathered Gods and Fishhooks: An Introduction to Hawaiian Archaeology and Prehistory. Honolulu: University of Hawaii Press, 1985.

Kuykendall, Ralph S. A History of the Hawaiian Kingdom. 3 vols. Honolulu: University of Hawaii Press, 1938-1967.

La Croix, Sumner J., and Price Fishback. “Firm-Specific Evidence on Racial Wage Differentials and Workforce Segregation in Hawaii’s Sugar Industry.” Explorations in Economic History 26 (1989): 403-423.

La Croix, Sumner J., and Price Fishback. “Migration, Labor Market Dynamics, and Wage Differentials in Hawaii’s Sugar Industry.” Advances in Agricultural Economic History 1 (2000): 31-72.

La Croix, Sumner J., and Christopher Grandy. “The Political Instability of Reciprocal Trade and the Overthrow of the Hawaiian Kingdom.” Journal of Economic History 57 (1997): 161-189.

La Croix, Sumner J., and Louis A. Rose. “The Political Economy of the Hawaiian Homelands Program.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, Colorado: Westview Press, 1999.

La Croix, Sumner J., and James Roumasset. “An Economic Theory of Political Change in Pre-Missionary Hawaii.” Explorations in Economic History 21 (1984): 151-168.

La Croix, Sumner J., and James Roumasset. “The Evolution of Property Rights in Nineteenth-Century Hawaii.” Journal of Economic History 50 (1990): 829-852.

Morgan, Theodore. Hawaii, A Century of Economic Change: 1778-1876. Cambridge, MA: Harvard University Press, 1948.

Schmitt, Robert C. Historical Statistics of Hawaii. Honolulu: University Press of Hawaii, 1977.

Citation: La Croix, Sumner. “Economic History of Hawai’i”. EH.Net Encyclopedia, edited by Robert Whaples. September 27, 2001. URL http://eh.net/encyclopedia/economic-history-of-hawaii/

Medieval Guilds

Gary Richardson, University of California, Irvine

Guilds existed throughout Europe during the Middle Ages. Guilds were groups of individuals with common goals. The term guild probably derives from the Anglo-Saxon root geld which meant ‘to pay, contribute.’ The noun form of geld meant an association of persons contributing money for some common purpose. The root also meant ‘to sacrifice, worship.’ The dual definitions probably reflected guilds’ origins as both secular and religious organizations.

The term guild had many synonyms in the Middle Ages. These included association, brotherhood, college, company, confraternity, corporation, craft, fellowship, fraternity, livery, society, and equivalents of these terms in Latin, Germanic, Scandinavian, and Romance languages such as ambach, arte, collegium, corporatio, fraternitas, gilda, innung, corps de métier, societas, and zunft. In the late nineteenth century, as a professional lexicon evolved among historians, the term guild became the universal reference for these groups of merchants, artisans, and other individuals from the ordinary (non-priestly and non-aristocratic) classes of society which were not part of the established religious, military, or governmental hierarchies.

Much of the academic debate about guilds stems from confusion caused by incomplete lexicographical standardization. Scholars often study guilds in one time and place and then assume that their findings apply to guilds everywhere and at all times, or they assert that the organizations they studied were the one true type of guild and that other organizations deserve neither the distinction nor serious study. To avoid this mistake, this encyclopedia entry begins with the recognition that guilds were groups whose activities, characteristics, and composition varied greatly across centuries, regions, and industries.

Guild Activities and Taxonomy

Guilds filled many niches in medieval economy and society. Typical taxonomies divide urban occupational guilds into two types: merchant and craft.

Merchant guilds were organizations of merchants who were involved in long-distance commerce and local wholesale trade, and may also have been retail sellers of commodities in their home cities and distant venues where they possessed rights to set up shop. The largest and most influential merchant guilds participated in international commerce and politics and established colonies in foreign cities. In many cases, they evolved into or became inextricably intertwined with the governments of their home towns.

Merchant guilds enforced contracts among members and between members and outsiders. Guilds policed members’ behavior because medieval commerce operated according to the community responsibility system. If a merchant from a particular town failed to fulfill his part of a bargain or pay his debts, all members of his guild could be held liable. When fellow members were in a foreign port, their goods could be seized and sold to cover the bad debt. Those members would then return to their hometown, where they would seek compensation from the original defaulter.

Merchant guilds also protected members against predation by rulers. Rulers seeking revenue had an incentive to seize money and merchandise from foreign merchants. Guilds threatened to boycott the realms of rulers who did this, a practice known as withernam in medieval England. Since boycotts impoverished both kingdoms which depended on commerce and governments for whom tariffs were the principal source of revenue, the threat of retaliation deterred medieval potentates from excessive expropriations.

Merchant guilds tended to be wealthier and of higher social status than craft guilds. Merchants’ organizations usually possessed privileged positions in religious and secular ceremonies and inordinately influenced local governments.

Craft guilds were organized along lines of particular trades. Members of these guilds typically owned and operated small businesses or family workshops. Craft guilds operated in many sectors of the economy. Guilds of victuallers bought agricultural commodities, converted them to consumables, and sold finished foodstuffs. Examples included bakers, brewers, and butchers. Guilds of manufacturers made durable goods, and when profitable, exported them from their towns to consumers in distant markets. Examples include makers of textiles, military equipment, and metal ware. Guilds of a third type sold skills and services. Examples include clerks, teamsters, and entertainers.

These occupational organizations engaged in a wide array of economic activities. Some manipulated input and output markets to their own advantage. Others established reputations for quality, fostering the expansion of anonymous exchange and making everyone better off. Because of the underlying economic realities, victualling guilds tended towards the former. Manufacturing guilds tended towards the latter. Guilds of service providers fell somewhere in between. All three types of guilds managed labor markets, lowered wages, and advanced their own interests at their subordinates’ expense. These undertakings had a common theme. Merchant and craft guilds acted to increase and stabilize members’ incomes.

Non-occupational guilds also operated in medieval towns and cities. These organizations had both secular and religious functions. Historians refer to these organizations as social, religious, or parish guilds as well as fraternities and confraternities. The secular activities of these organizations included providing members with mutual insurance, extending credit to members in times of need, aiding members in courts of law, and helping the children of members afford apprenticeships and dowries.

The principal pious objective was the salvation of the soul and escape from Purgatory. The doctrine of Purgatory was the belief that there lay between Heaven and Hell an intermediate place, by passing through which the souls of the dead might cleanse themselves of the guilt attached to the sins committed during their lifetimes by submitting to a graduated scale of divine punishment. The suffering through which they were cleansed might be abbreviated by the prayers of the living, and most especially by masses. Praying devoutly, sponsoring masses, and giving alms were three of the most effective methods of redeeming one’s soul. These works of atonement could be performed by the penitent on their own or by someone else on their behalf.

Guilds served as mechanisms for organizing, managing, and financing the collective quest for eternal salvation. Efforts centered on three types of tasks. The first were routine and participatory religious services. Members of guilds gathered at church on Sundays and often also on other days of the week. Members marked ceremonial occasions, such as the day of their patron saint or Good Friday, with prayers, processions, banquets, masses, the singing of psalms, the illumination of holy symbols, and the distribution of alms to the poor. Some guilds kept chaplains on call. Others hired priests when the need arose. These clerics hosted regular religious services, such as vespers each evening or mass on Sunday morning, and prayed for the souls of members living and deceased.

The second category consisted of actions performed on members’ behalf after their deaths and for the benefit of their souls. Postmortem services began with funerals and burials, which guilds arranged for the recently departed. The services were elaborate and extensive. On the day before interment, members gathered around the corpse, lit candles, and sang a placebo and a dirge, which were the vespers and matins from the Office of the Dead. On the day of interment, a procession marched from churchyard to graveyard, buried the body, distributed alms, and attended mass. Additional masses, numbering from one to forty, occurred later that day and sometimes for months thereafter. Postmortem prayers continued even further into the future, in theory into perpetuity. All guilds prayed for the souls of deceased members. These prayers were a prominent part of all guild events. Many guilds also hired priests to pray for the souls of the deceased. A few guilds built chantries where priests said those prayers.

The third category involved indoctrination and monitoring to maintain the piety of members. The Christian catechism of the era contained clear commandments. Rest on the Sabbath and religious holidays. Be truthful. Do not deceive others. Be chaste. Do not commit adultery. Be faithful to your family. Obey authorities. Be modest. Do not covet thy neighbors’ possessions. Do not steal. Do not gamble. Work hard. Support the church. Guild ordinances echoed these exhortations. Members should neither gamble nor lie nor steal nor drink to excess. They should restrain their gluttony, lust, avarice, and corporal impulses. They should pray to the Lord, live like His son, and give alms to the poor.

Righteous living was important because members’ fates were linked together. The more pious one’s brethren, the more helpful their prayers, and the quicker one escaped from purgatory. The worse one’s brethren, the less salutary their supplications and the longer one suffered during the afterlife. So, in hopes of minimizing purgatorial pain and maximizing eternal happiness, guilds beseeched members to restrain physical desires and forgo worldly pleasures.

Guilds also operated in villages and the countryside. Rural guilds performed the same tasks as social and religious guilds in towns and cities. Recent research on medieval England indicates that guilds operated in most, if not all, villages. Villages often possessed multiple guilds. Most rural residents belonged to a guild. Some may have joined more than one organization.

Guilds often spanned multiple dimensions of this taxonomy. Members of craft guilds participated in wholesale commerce. Members of merchant guilds opened retail shops. Social and religious guilds evolved into occupational associations. All merchant and craft guilds possessed religious and fraternal features.

In sum, guild members sought prosperity in this life and providence in the next. Members wanted high and stable incomes, quick passage through Purgatory, and eternity in Heaven. Guilds helped them coordinate their collective efforts to attain these goals.

Guild Structure and Organization

To attain their collective goals, guild members had to cooperate. If some members slacked off, all would suffer. Guilds that wished to lower the costs of labor had to get all masters to reduce wages. Guilds that wished to raise the prices of products had to get all members to restrict output. Guilds that wished to develop respected reputations had to get all members to sell superior merchandise. Guild members contributed money – to pay priests and purchase pious paraphernalia – and contributed time, emotion, and personal energy, as well. Members participated in frequent religious services, attended funerals, and prayed for the souls of the brethren. Members had to live piously, abstaining both from the pleasures of the flesh and the material temptations of secular life. Members also had to administer their associations. The need for coordination was a common denominator.

To convince members to cooperate and advance their common interests, guilds formed stable, self-enforcing associations that possessed structures for making and implementing collective decisions.

A guild’s members met at least once a year (and in most cases more often) to elect officers, audit accounts, induct new members, debate policies, and amend ordinances. Officers such as aldermen, stewards, deans, and clerks managed the guild’s day to day affairs. Aldermen directed guild activities and supervised lower-ranking officers. Stewards kept guild funds, and their accounts were periodically audited. Deans summoned members to meetings, feasts, and funerals, and in many cases, policed members’ behavior. Clerks kept records. Decisions were usually made by majority vote among the master craftsmen.

These officers administered a nexus of agreements among a guild’s members. Details of these agreements varied greatly from guild to guild, but the issues addressed were similar in all cases. Members agreed to contribute certain resources and/or take certain actions that furthered the guild’s occupational and spiritual endeavors. Officers of the guild monitored members’ contributions. Manufacturing guilds, for example, employed officers known as searchers who scrutinized members’ merchandise to make sure it met guild standards and inspected members’ shops and homes seeking evidence of attempts to circumvent the rules. Members who failed to fulfill their obligations faced punishments of various sorts.

Punishments varied across transgressions, guilds, time, and space, but a pattern existed. First-time offenders were punished lightly, perhaps suffering public scolding and paying small monetary fines, while repeat offenders were punished harshly. The ultimate threat was expulsion. Guilds could do nothing harsher because laws protected persons and property from arbitrary expropriation and physical abuse. The legal system set the rights of individuals above the interests of organizations. Guilds were voluntary associations. Members facing harsh punishments could quit the guild and walk away. The most the guild could extract was the value of membership. Abundant evidence indicates that guilds enforced agreements in this manner.

Other game-theoretic options existed, of course. Guilds could have punished uncooperative members by taking actions with wider consequences. Members of a manufacturing guild who caught one of their own passing off shoddy merchandise under the guild’s good name could have punished the offender by collectively lowering the quality of their products for a prolonged period. That would lower the offender’s income, albeit at the cost of lowering the income of all other members as well. Similarly, members of a guild that caught one of their brethren shirking on prayers and sinning incessantly could have punished the offender by collectively forsaking the Lord and descending into debauchery. Then, no one would or could pray for the soul of the offender, and his period in Purgatory would be extended significantly. In broader terms, cheaters could have been punished by any action that reduced the average incomes of all guild members or increased the pain that all members expected to endure in Purgatory. In theory, such threats could have convinced even the most recalcitrant members to contribute to the common good.

But, no evidence exists that craft guilds ever operated in such a manner. None of the hundreds of surviving guild ordinances contains threats of this kind. No surviving guild documents describe punishing the innocent along with the guilty. Guilds appear to have eschewed indiscriminate retaliation for several salient reasons. First, monitoring members’ behavior was costly and imperfect. Time and risk preferences varied across individuals. Uncertainty of many kinds influenced craftsmen’s decisions. Some members would have attempted to cheat regardless of the threatened punishment. Punishments, in other words, would have occurred in equilibrium. The cost of carrying out an equilibrium-sustaining threat of expulsion would have been lower than the cost of carrying out an equilibrium-sustaining threat that reduced average income. Thus, expelling members caught violating the rules was an efficient method of enforcing the rules. Second, punishing free riders by indiscriminately harming all guild members may not have been a convincing threat. Individuals may not have believed that threats of mutual assured destruction would be carried out. The incentive to renegotiate was strong. Third, skepticism probably existed about threats to do unto others as they had done unto you. That concept contradicted a fundamental teaching of the church, to do unto others as you would have them do unto you. It also contradicted Jesus’ admonition to turn the other cheek. Thus, indiscriminate retaliation based upon hair-trigger strategies was not an organizing principle likely to be adopted by guilds whose members hoped to speed passage through Purgatory.
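The claim that an expulsion threat was cheaper to sustain than indiscriminate retaliation can be illustrated with a toy calculation. The Python sketch below compares the expected cost a guild would bear from actually carrying out each threat when monitoring is imperfect, so that some violations (and hence some punishments) occur in equilibrium. All of the parameters are hypothetical; the surviving guild records contain no such numbers.

def expected_threat_cost(violation_probability, cost_of_carrying_out_threat):
    """Expected per-period cost to the guild of executing its punishment,
    given that imperfect monitoring means some violations occur in equilibrium."""
    return violation_probability * cost_of_carrying_out_threat

# Hypothetical parameters, for illustration only.
violation_probability = 0.05      # chance that some member breaks the rules in a given period
value_of_membership = 10.0        # surplus lost when one member is expelled
income_loss_per_member = 3.0      # income each member forgoes if quality is collectively degraded
n_members = 20

# Expulsion punishes only the offender, so the guild's cost is bounded by the
# value of the lost membership.
cost_expulsion = expected_threat_cost(violation_probability, value_of_membership)

# Indiscriminate retaliation (collectively lowering quality) punishes every member.
cost_collective = expected_threat_cost(violation_probability,
                                       income_loss_per_member * n_members)

print(f"Expected cost of sustaining the expulsion threat:  {cost_expulsion:.2f}")
print(f"Expected cost of sustaining the collective threat: {cost_collective:.2f}")
# With these numbers the collective threat is six times costlier to sustain.

The specific numbers do not matter: any collective punishment scales the enforcement cost with the size of the guild, while expulsion does not.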

A hierarchy existed in large guilds. Masters were full members who usually owned their own workshops, retail outlets, or trading vessels. Masters employed journeymen, who were laborers who worked for wages on short term contracts or a daily basis (hence the term journeyman, from the French word for day). Journeymen hoped to one day advance to the level of master. To do this, journeymen usually had to save enough money to open a workshop and pay for admittance, or if they were lucky, receive a workshop through marriage or inheritance.

Masters also supervised apprentices, who were usually boys in their teens who worked for room, board, and perhaps a small stipend in exchange for a vocational education. Both guilds and government regulated apprenticeships, usually to ensure that masters fulfilled their part of the apprenticeship agreement. Terms of apprenticeships varied, usually lasting from five to nine years.

The internal structure of guilds varied widely across Europe. Little is known for certain about the structure of smaller guilds, since they left few written documents. Most of the evidence comes from large, successful associations whose internal records survive to the present day. The description above is based on such documents. It seems likely that smaller organizations fulfilled many of the same functions, but their structure was probably less formal and more horizontal.

Relationships between guilds and governments also varied across Europe. Most guilds aspired to attain recognition as a self-governing association with the right to possess property and other legal privileges. Guilds often purchased these rights from municipal and national authorities. In England, for example, a guild which wished to possess property had to purchase from the royal government a writ allowing it to do so. But, most guilds operated without formal sanction from the government. Guilds were spontaneous, voluntary, and self-enforcing associations.

Guild Chronology and Impact

Reconstructing the history of guilds poses several problems. Few written records survive from the twelfth century and earlier. Surviving documents consist principally of the records of rulers – kings, princes, churches – that taxed, chartered, and granted privileges to organizations. Some evidence also exists in the records of notaries and courts, which recorded and enforced contracts between guild masters and outsiders, such as the parents of apprentices. From the fourteenth and fifteenth centuries, records survive in larger numbers. Surviving records include statute books and other documents describing the internal organization and operation of guilds. The evidence at hand links the rise and decline of guilds to several important events in the history of Western Europe.

In the late Roman Empire, organizations resembling guilds existed in most towns and cities. These voluntary associations of artisans, known as collegia, were occasionally regulated by the state but largely left alone. They were organized along trade lines and possessed a strong social base, since their members shared religious observances and fraternal dinners. Most of these organizations disappeared during the Dark Ages, when the Western Roman Empire disintegrated and urban life collapsed. In the Eastern Empire, some collegia appear to have survived from antiquity into the Middle Ages, particularly in Constantinople, where Leo the Wise codified laws concerning commerce and crafts at the beginning of the tenth century and sources reveal an unbroken tradition of state management of guilds from ancient times. Some scholars suspect that in the West a few of the most resilient collegia in the surviving urban areas may have evolved directly into medieval guilds, but the absence of documentary evidence makes this unlikely and, in any case, unprovable.

In the centuries following the Germanic invasions, evidence indicates that numerous guild-like associations existed in towns and rural areas. These organizations functioned much like modern burial and benefit societies: their objectives included prayers for the souls of deceased members, payments of weregilds in cases of justifiable homicide, and support for members involved in legal disputes. These rural guilds were descendants of Germanic social organizations known as gilda, which the Roman historian Tacitus referred to as convivium.

During the eleventh through thirteenth centuries, considerable economic development occurred. The sources of development were increases in the productivity of medieval agriculture, the abatement of external raiding by Scandinavian and Muslim brigands, and population increases. The revival of long-distance trade coincided with the expansion of urban areas. Merchant guilds formed an institutional foundation for this commercial revolution. Merchant guilds flourished in towns throughout Europe, and in many places, rose to prominence in urban political structures. In many towns in England, for example, the merchant guild became synonymous with the body of burgesses and evolved into the municipal government. In Genoa and Venice, the merchant aristocracy controlled the city government, which promoted their interests so well as to preclude the need for a formal guild.

Merchant guilds’ principal accomplishment was establishing the institutional foundations for long-distance commerce. Italian sources provide the best picture of guilds’ rise to prominence as an economic and social institution. Merchant guilds appear in many Italian cities in the twelfth century. Craft guilds became ubiquitous during the succeeding century.

In northern Europe, merchant guilds rose to prominence a few generations later. In the twelfth and early thirteenth centuries, local merchant guilds in trading cities such as Lubeck and Bremen formed alliances with merchants throughout the Baltic region. The alliance system grew into the Hanseatic League which dominated trade around the Baltic and North Seas and in Northern Germany.

Social and religious guilds existed at this time, but few records survive. Small numbers of craft guilds developed, principally in prosperous industries such as cloth manufacturing, but records are also rare, and numbers appear to have been small.

As economic expansion continued in the thirteenth and fourteenth centuries, the influence of the Catholic Church grew, and the doctrine of Purgatory developed. The doctrine inspired the creation of countless religious guilds: it gave individuals strong incentives to belong to a group whose prayers would help them enter heaven, and it gave guilds mechanisms with which to induce members to exert effort on behalf of the organization. Many of these religious associations evolved into occupational guilds. Most of the Livery Companies of London, for example, began as intercessory societies around this time.

The number of guilds continued to grow after the Black Death. There are several potential explanations. The decline in population raised per-capita incomes, which encouraged the expansion of consumption and commerce, which in turn necessitated the formation of institutions to satisfy this demand. Repeated epidemics decreased family sizes, particularly in cities, where the typical adult had on average perhaps 1.5 surviving children, few surviving siblings, and only a small extended family, if any. Guilds replaced extended families in a form of fictive kinship. The decline in family size and impoverishment of the church also forced individuals to rely on their guild more in times of trouble, since they no longer could rely on relatives and priests to sustain them through periods of crisis. All of these changes bound individuals more closely to guilds, discouraged free riding, and encouraged the expansion of collective institutions.

For nearly two centuries after the Black Death, guilds dominated life in medieval towns. Any town resident of consequence belonged to a guild. Most urban residents thought guild membership to be indispensable. Guilds dominated manufacturing, marketing, and commerce. Guilds dominated local politics and influenced national and international affairs. Guilds were the center of social and spiritual life.

The heyday of guilds lasted into the sixteenth century. The Reformation weakened guilds in most newly Protestant nations. In England, for example, the royal government suppressed thousands of guilds in the 1530s and 1540s. The king and his ministers dispatched auditors to every guild in the realm. The auditors seized spiritual paraphernalia and funds retained for religious purposes, disbanded guilds which existed for purely pious purposes, and forced craft and merchant guilds to pay large sums for the right to remain in operation. Those guilds that paid still lost the ability to provide members with spiritual services.

In Protestant nations after the Reformation, the influence of guilds waned. Many turned to governments for assistance. They requested monopolies on manufacturing and commerce and asked courts to force members to live up to their obligations. Guilds lingered where governments provided such assistance. Guilds faded where governments did not. By the seventeenth century, the power of guilds had withered in England. Guilds retained strength in nations which remained Catholic. France abolished its guilds during the French Revolution in 1791, and Napoleon’s armies disbanded guilds in most of the continental nations which they occupied during the next two decades.

References

Basing, Patricia. Trades and Crafts in Medieval Manuscripts. London: British Library, 1990.

Cooper, R.C.H. The Archives of the City of London Livery Companies and Related Organizations. London: Guildhall Library, 1985.

Davidson, Clifford. Technology, Guilds, and Early English Drama. Early Drama, Art, and Music Monograph Series, 23. Kalamazoo, MI: Medieval Institute Publications, Western Michigan University, 1996.

Epstein, S. R. “Craft Guilds, Apprenticeships, and Technological Change in Pre-Industrial Europe.” Journal of Economic History 58 (1998): 684-713.

Epstein, Steven. Wage and Labor Guilds in Medieval Europe. Chapel Hill, NC: University of North Carolina Press, 1991.

Gross, Charles. The Gild Merchant; A Contribution to British Municipal History. Oxford: Clarendon Press, 1890.

Gustafsson, Bo. “The Rise and Economic Behavior of Medieval Craft Guilds: An Economic-Theoretical Interpretation.” Scandinavian Journal of Economics 35, no. 1 (1987): 1-40.

Hanawalt, Barbara. “Keepers of the Lights: Late Medieval English Parish Gilds.” Journal of Medieval and Renaissance Studies 14 (1984).

Hatcher, John and Edward Miller. Medieval England: Towns, Commerce and Crafts, 1086 – 1348. London: Longman, 1995.

Hickson, Charles R. and Earl A. Thompson. “A New Theory of Guilds and European Economic Development.” Explorations in Economic History. 28 (1991): 127-68.

Lopez, Robert. The Commercial Revolution of the Middle Ages, 950-1350. Englewood Cliffs, NJ: Prentice-Hall, 1971.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Pirenne, Henri. Medieval Cities: Their Origins and the Revival of Trade. Frank Halsey (translator). Princeton: Princeton University Press, 1952.

Richardson, Gary. “A Tale of Two Theories: Monopolies and Craft Guilds in Medieval England and Modern Imagination.” Journal of the History of Economic Thought (2001).

Richardson, Gary. “Brand Names Before the Industrial Revolution.” UC Irvine Working Paper, 2000.

Richardson, Gary. “Guilds, Laws, and Markets for Manufactured Merchandise in Late-Medieval England,” Explorations in Economic History 41 (2004): 1–25.

Richardson, Gary. “Christianity and Craft Guilds in Late Medieval England: A Rational Choice Analysis.” Rationality and Society 17 (2005): 139-89.

Richardson, Gary. “The Prudent Village: Risk Pooling Institutions in Medieval English Agriculture,” Journal of Economic History 65, no. 2 (2005): 386–413.

Smith, Toulmin. English Gilds. London: N. Trübner & Co., 1870.

Swanson, Heather. Building Craftsmen in Late Medieval York. York: University of York, 1983.

Thrupp, Sylvia. The Merchant Class of Medieval London 1300-1500. Chicago: University of Chicago Press, 1989.

Unwin, George. The Guilds and Companies of London. London: Methuen & Company, 1904.

Ward, Joseph. Metropolitan Communities: Trade Guilds, Identity, and Change in Early Modern London. Palo Alto: Stanford University Press, 1997.

Westlake, H. F. The Parish Gilds of Mediaeval England. London: Society for Promotion of Christian Knowledge, 1919.

Citation: Richardson, Gary. “Medieval Guilds”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/medieval-guilds/