
The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High-quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins, a trade that predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution of 1688-89 brought their own changes in style (Clarke, 1982, chapter 1). What remained constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair: long, stiff, smooth hairs called guard hairs, which protect the shorter, softer hair, called wool, that grows next to the animal’s skin. Only the wool can be felted. Each of the shorter hairs is barbed, and once the barbs at the ends of the hairs are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water; and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters, especially when fashion called for hats with large brims. The highest-quality hats were made entirely from beaver wool, whereas lower-quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required a share of open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (Castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (Castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the pelt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin, which prevents the barbs from opening; thus, to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt-maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower-quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed with the development of carroting, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carroting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a great advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth-century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carroting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carroting was widely used, however, the relationship was reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion of the carroting technology in France. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within the narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. Far fewer prices are available for Paris, but we do know that over the period 1739 to 1753 the trend there was also sharply upward, with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year   Parchment   Coat   Average*     Year   Parchment   Coat   Average*
1713     5.21      4.62     5.03       1739     8.51      7.11     8.05
1714     5.24      7.86     5.66       1740     8.44      6.66     7.88
1715     4.88       –       5.49       1741     8.30      6.83     7.84
1716     4.68      8.81     5.16       1742     7.72      6.41     7.36
1717     5.29      8.37     5.65       1743     8.98      6.74     8.27
1718     4.77      7.81     5.22       1744     9.18      6.61     8.52
1719     5.30      6.86     5.51       1745     9.76      6.08     8.76
1720     5.31      6.05     5.38       1746    12.73      7.18    10.88
1721     5.27      5.79     5.29       1747    10.68      6.99     9.50
1722     4.55      4.97     4.55       1748     9.27      6.22     8.44
1723     8.54      5.56     7.84       1749    11.27      6.49     9.77
1724     7.47      5.97     7.17       1750    17.11      8.42    14.00
1725     5.82      6.62     5.88       1751    14.31     10.42    12.90
1726     5.41      7.49     5.83       1752    12.94     10.18    11.84
1727      –         –       7.22       1753    10.71     11.97    10.87
1728      –         –       8.13       1754    12.19     12.68    12.08
1729      –         –       9.56       1755    12.05     12.04    11.99
1730      –         –       8.71       1756    13.46     12.02    12.84
1731      –         –       6.27       1757    12.59     11.60    12.17
1732      –         –       7.12       1758    13.07     11.32    12.49
1733      –         –       8.07       1759    15.99      –      14.68
1734      –         –       7.39       1760    13.37     13.06    13.22
1735      –         –       8.33       1761    10.94     13.03    11.36
1736     8.72      7.07     8.38       1762    13.17     16.33    13.83
1737     7.94      6.46     7.50       1763    16.33     17.56    16.34
1738     8.95      6.47     8.32

* A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735.

Source: Carlos and Lewis, 1999.
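The footnote defines the Average column as a trade-weighted mean across pelt types. As a minimal sketch of that computation (in Python), using the 1740 parchment and coat prices from Table 1 together with a hypothetical half-parchment price and hypothetical trade weights, neither of which comes from the source:

```python
# Sketch of the weighted-average pelt price reported in Table 1.
# The quantities stand in for the Fort Albany trade weights, and the
# half-parchment price is assumed; both are hypothetical.

def weighted_average_price(prices, quantities):
    """Trade-weighted average price (shillings per pelt)."""
    total_value = sum(prices[kind] * quantities[kind] for kind in prices)
    total_pelts = sum(quantities.values())
    return total_value / total_pelts

prices_1740 = {"parchment": 8.44, "coat": 6.66, "half parchment": 7.50}
weights_1740 = {"parchment": 6000, "coat": 2500, "half parchment": 1000}

print(f"{weighted_average_price(prices_1740, weights_1740):.2f} shillings per pelt")
# -> 7.87, close to the 7.88 reported for 1740 in Table 1
```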

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool, referred to as “beaver hats,” and hats containing a combination of beaver and a lower-cost wool, such as rabbit, called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England, along with almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats, or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades treaties were signed with many of the northern tribes forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously, so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s, shortly after the French established several trading posts in the area. Perhaps surprisingly, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300 over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX - bX²,  a, b > 0   (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/X̄, where X̄ is the upper limit to population size. The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX - bX² - H   (2)

where ΔX is the annual change in the population and H is the harvest. The choices of the parameter a and the maximum population X̄ are central to the population estimates, and they have been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).
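To make the model concrete, the following minimal sketch (in Python) iterates equation (2) forward period by period. The parameter values and harvest path are purely illustrative; they are not the calibrated estimates used by Carlos and Lewis (1993).

```python
# Minimal simulation of the harvest model in equation (2).
# Parameter values are hypothetical, chosen only for illustration.

a = 0.25           # maximum proportional growth rate (hypothetical)
X_bar = 200_000    # upper limit to the population (hypothetical)
b = a / X_bar

def simulate(X0, harvests):
    """Iterate dX = a*X - b*X^2 - H forward, one period per harvest."""
    X = X0
    path = [X]
    for H in harvests:
        X = max(X + a * X - b * X ** 2 - H, 0.0)
        path.append(X)
    return path

# Natural growth a*X - b*X^2 peaks at X = X_bar/2 (the biological
# optimum), where the maximum sustained yield is a * X_bar / 4.
msy = a * X_bar / 4
print(f"maximum sustained yield: {msy:,.0f} pelts per year")

# Harvesting 20 percent above that yield steadily depletes the stock.
path = simulate(X0=X_bar / 2, harvests=[1.2 * msy] * 10)
print(f"stock after 10 years of over-harvesting: {path[-1]:,.0f}")
```

Run from a stock at the biological optimum, a harvest persistently above the maximum sustained yield drives the population down each period, which is the mechanism behind the depletion scenario described next.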

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in the price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. In between lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation in which there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what has been called the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game, and even beaver, for food. Combined with this limitation on private property was an Ethic of Generosity that included liberal gift-giving, whereby any visitor to one’s encampment was to be supplied with food and shelter.

Social norms such as gift-giving and the related Good Samaritan principle emerged from the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note of such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such obligations were reciprocal and, especially in the sub-arctic world, served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests of the 1740s and, ultimately, the depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology that improved production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade involved much more than beads and baubles, or even guns and alcohol; rather, Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but also included knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a relative of the mink) was a third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and, later, between the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play the two parties off against each other (Ray and Freeman, 1978).
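Because every good and every fur carried an official price in the same unit, the exchange rate between any two items follows by simple division. A minimal sketch (in Python) of that arithmetic, using the official York Factory prices quoted above; the function and its name are ours, for illustration only, and do not reproduce the Company’s bookkeeping:

```python
# The made beaver (MB) as a unit of account: every good and fur has
# an official price in MB, so any pair implies an exchange rate.
official_prices_mb = {
    "prime beaver pelt": 1.0,
    "marten pelt": 1.0 / 3.0,
    "blanket": 7.0,
    "gallon of brandy": 4.0,
    "yard of cloth": 3.5,
}

def pelts_for(good, quantity, pelt="prime beaver pelt"):
    """Pelts of a given type needed to buy a quantity of a good."""
    return quantity * official_prices_mb[good] / official_prices_mb[pelt]

print(pelts_for("gallon of brandy", 1))              # 4.0 prime beaver pelts
print(pelts_for("yard of cloth", 2))                 # 7.0 prime beaver pelts
print(pelts_for("blanket", 1, pelt="marten pelt"))   # 21.0 marten pelts
```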

The records of the Hudson’s Bay Company provide us with a unique window on the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives bought only goods they wanted. It is clear from the Company records that it was the Natives who largely determined the nature and quality of those goods. The records also tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion, among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory in 1740, Native traders received a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much on firearms, and twice as much on cloth; more was also spent on blankets and kettles than on alcohol. Thus brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.
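The drinks calculation can be verified with simple arithmetic. A quick sketch (in Python), assuming the 128-fluid-ounce gallon; the implied adult population is an inference from the text, not a figure given in the source:

```python
# Back-of-the-envelope check of the "ten two-ounce drinks" claim.
gallons_received = 494
ounces_per_gallon = 128   # assumed measure
ounces_per_drink = 2

total_drinks = gallons_received * ounces_per_gallon / ounces_per_drink
print(f"total two-ounce drinks: {total_drinks:,.0f}")   # 31,616

# "No more than ten drinks per adult per year" implies an adult
# population of at least total_drinks / 10, i.e. roughly 3,200.
print(f"implied minimum adult population: {total_drinks / 10:,.0f}")
```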

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident,” as some contemporaries described them, or were they “industrious,” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather, they seized the opportunity provided by the strong fur market, increasing their effort in the commercial sector and thereby dramatically augmenting their purchases of the goods, namely the luxuries, that could raise their living standards.

Figure 4
Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors (hunting, gathering, food preparation and, to some extent, agriculture) must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no. 2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians, Volume 6: Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no. 1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670-1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/

An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war during the years of early independence in 1918, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

Gross Domestic Product per capita in Finland and in EU 15, 1860-2004, index 2004 = 100

Sources: Eurostat (2001–2005)

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and there are relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because the height differences are small. The country is among the larger ones in Europe in area, but it is sparsely populated, with 44 people per square mile and 5.3 million people altogether. The population is very homogeneous, with few people of foreign origin (about two percent), and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years the population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash-and-burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some iron works were founded in the southwestern part of the country to process Swedish iron ore as early as the seventeenth century. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam engines were introduced in the cotton factories, and the first rag-paper machine came into use in the 1840s. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraphs came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but generally the diffusion of new technology to everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening up of British markets to Finnish sawmill products in the 1860s were important triggers of industrial development. From the 1870s on pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved in relation to the international prices of grain and industrial products.

Openness of the economies (exports+imports of goods/GDP, percent) in Finland and EU 15, 1960-2005

Sources: Heikkinen and van Zanden 2004; Hjerppe 1989.

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free, along with many other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka from 1865, was tied to gold in 1878, and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and the Western economies narrowed dramatically in the interwar period, although Finland’s position relative to the Scandinavian countries, which also experienced fast growth, remained unchanged: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. In addition, Finnish industries went into depression at different times, which made the downturn milder than it would have been had all the industries experienced their troughs simultaneously. The Depression, however, had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian. On the eve of World War II, almost half of the labor force and one-third of the production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise the domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory, and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, which was based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the war-reparations deliveries possible. They were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, and this contributed to the decrease in farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies were liberalized from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to their earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investment, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than on increased labor inputs. GDP growth was 4.9 percent and GDP per capita growth 4.3 percent in 1950–1973, matching the rapid pace of many other European countries.

Exports and, accordingly, the structure of the manufacturing industry were diversified by Soviet and, later, Western orders for machinery products, including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans to the farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and further to government-subsidized dumping on the international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly-growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, the Western European recession and problems in adjusting to the new liberal order of international capital movement pushed the Finnish economy into a depression that was worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy then revived, growing at a brisk 3.6 percent per year in 1994-2005; over the longer period 1973-2005, GDP growth averaged 2.5 percent per year and GDP per capita growth 2.1 percent.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in research-and-development outlays to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion about the future of the industry is alarming, however. The position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

What has typified the last couple of decades is globalization, which has spread to all areas. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the shares of Finnish public companies are now in foreign hands; foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people of foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing, as trade with the rising Asian economies has gained in importance and Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century. Western Europe has a share of three-fifths, which has been typical. The United Kingdom was long Finland’s biggest trading partner, with a share of one-third, but this began to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union at first had minimal trade with the West, and its share of Finnish foreign trade was just a few percentage points. After World War II, Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia is now gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent by 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period, inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average farm size has recently risen to the European average. The shares of agriculture in production and employment are now also at Western European levels. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even though it supplies four-fifths of the wood used in Finnish sawmills and paper factories; the remaining fifth is imported, mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high, but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy focuses mainly on two issues. Finland’s very open economy is strongly influenced by the rather sluggish economic development of the European Union, so high growth rates are not to be expected in Finland either. Moreover, since the 1990s depression the investment rate has remained below the level common in the postwar period, which is a cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but its costs create tensions. High taxation is one consequence, and political parties debate whether the large public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

Development of Finland’s exports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

Development of Finland’s imports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.


Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and only to investigate the economic issues at hand, not to give complete histories of their film industries; given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, which has been and still is the largest film industry in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first to liberalize, in the late eighteenth century. Most European countries followed during the nineteenth century: Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. The result was that commercial, formalized and standardized live entertainment emerged and destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets had maximized the productivity attainable through process innovations. Creative inputs, for example, circulated swiftly among the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the Kinetograph camera and the coin-operated Kinetoscope viewer, which together enabled the shooting of films and their play-back in slot-coin machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured different technologies that were all available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861, patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Paris in 1889 and saw the chronophotographic work of Marey and his assistant Georges Demeney. In 1890, the Englishman William Friese-Greene presented a working camera to a group of enthusiasts. In 1891, Edison filed an American patent for a film camera, which had a different moving mechanism from the Marey camera. In 1893 the Frenchman Demeney filed a patent for a camera of his own. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device still used in film projectors today: it produces the intermittent movement of the film, advancing it frame by frame and holding each frame steady behind the lens between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, with inventors building upon and improving each other’s work. This connects to Joel Mokyr’s observation that in the nineteenth century communication became increasingly important to innovation, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth-century invention, in that it was a smart combination of many existing technologies; many different innovations in the technologies it combined had been necessary to make the innovation of cinema possible. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the Western world – more quickly than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated Kinetoscope of Edison was present at fairs and in entertainment venues: spectators threw a coin in the machine and peered through a peephole to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: exhibitors who traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These catered to general, popular audiences, whereas the Lumière operators and others provided more upscale parts of theater programs, or special programs for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain whether it would persist or instead be quickly forgotten or marginalized, as happened to the contemporaneous boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, with its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides being shown in cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked as if the cinematograph would remain a niche product – a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp and sustained growth phase started: the market increased by a further two orders of magnitude, and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in prime city center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They varied from a dollar or a dollar and a half in ‘first-run’ cinemas to five cents in sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to charge far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if they had to pay far more for film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, etc. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words ‘motion pictures’ in their IPO prospectuses, investors would flock to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold outright. This meant that the cinema owner who bought a film received all the marginal revenues the film generated. These revenues were largely marginal profits, as most costs were fixed, so an additional ticket sold was pure (gross) profit. Because the producer did not receive any of these revenues, there was little incentive at the margin to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers gained a stronger incentive to increase a film’s quality, because higher quality would generate more rentals (Bakker 2005). The incentive increased further when percentage contracts were introduced for large city center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled by that between producers and distributors.

The Decline and Fall of the European Film Industry

Because the quality race happened while Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They were also able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital necessary to participate in the quality race while their countries were at war. Even if they had managed to secure it, such lavish expenditures might have been difficult to justify while people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was one of the largest film producers in the U.S. market. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments), expecting that these would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies had given up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed dismally, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size became of essential importance, as it affected the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. Market size was further diminished by heavy taxes on cinema tickets, which sharply increased the price of cinema relative to live entertainment.

Second, the emerging Hollywood studios benefited from first mover advantages in feature film production: they owned international distribution networks, they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before they were even made (blind-bidding), the quality gap with European features was so large it would be difficult to close in one go, and, finally, the American origin of the feature films in the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films from other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast coast film industry and the newly emerging film industry in Florida declined as U.S. film companies started to locate in Southern California. First of all, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could participate in many try-outs to achieve optimal casting, and could easily be rented out to competitors when not immediately wanted. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (B-films, for example, were made during the night), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district might have been competitive and even have had a lower overall cost/quality ratio than Hollywood, a first European major, lacking external economies, would have had a substantially higher cost/quality ratio and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, which resulted in American films with an even higher perceived quality, thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which actually parachuted a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the transformation of the industry possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as that of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) which did not affect the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What sound did do was industrialize away most of the musicians and entertainers who had complemented silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, which became more culture-specific now that they were in the local language, but at the same time to decrease the foreign revenues they received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection; many European countries set quotas on foreign films shortly before the coming of sound. In France, for example, where sound became widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly the result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they represented additional sales with little additional cost to the producer, since the film itself had already been made. Films had special characteristics that necessitated international sales. Because they essentially were copyrights rather than physical products, theoretically the costs of additional sales were zero. Film production involved high endogenous sunk costs, recouped through renting out the copyright to the film. Marginal foreign revenue thus equaled marginal net revenue (and marginal profit once the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied with perceived quality and the general conditions of supply and demand, the ticket price paid by consumers generally did not vary. It varied only by cinema: highest in first-run city center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema providing one hour of film produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced would have perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as fixed cinema costs plus its rental price. A seven-hundred-seat cinema, with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour, needed a film selling at least ten thousand spectator-hours, and would not be prepared to pay for that (marginal) film, because it only recouped fixed costs. Films needed a minimum selling capacity to cover cinema fixed costs. Producers could only price down low-budget films to just above the threshold level. With a lower expected selling capacity, these films could not be sold at any price.
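Stated schematically (a restatement of the example above in notation of our own, not a formula quoted from Bakker): with $p$ the admission price per spectator-hour, $F$ the cinema’s weekly fixed costs, $R$ the weekly rental and $S$ a film’s selling capacity in spectator-hours, a film was only worth booking if

$$pS \geq F + R,$$

so that even at a rental of zero the minimum viable selling capacity was $S_{\min} = F/p = \$500 / \$0.05 = 10{,}000$ spectator-hours, as in the example.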

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch the audience appeal was known, and each subsequent country added additional information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty undoubtedly was important.

The second reason for limited price competition was the opportunity cost, given cinemas’ production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, which sold all 39,200 spectator-hours, the cinema made a profit of $260 (($0.05 × 39,200) – $1,200 – $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema owner would lose $120 (($0.05 × 19,600) – $600 – $500 = –$120). Thus, the cinema owner would want to pay no more than $220 for the lower-budget film, given that the high-budget film was available (($0.05 × 19,600) – $220 – $500 = $260). So the film with half the selling capacity of the high-capacity film would need to rent for under a fifth of the latter’s price even to make a transaction possible.
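The arithmetic of this example is easy to verify; the short sketch below simply recomputes the figures quoted in the text (the constants and the function name are illustrative, not taken from the sources).

```python
# Interwar cinema example from the text: a 700-seat cinema with 39,200
# spectator-hours of weekly capacity, $500 weekly fixed costs, and an
# average admission price of 5 cents per spectator-hour. Computed in
# cents to keep the arithmetic exact.
ADMISSION = 5              # cents per spectator-hour
FIXED_COSTS = 500 * 100    # weekly fixed costs, in cents
CAPACITY = 39_200          # spectator-hours per week


def weekly_profit(hours_sold: int, rental: int) -> int:
    """Cinema profit in cents: ticket revenue minus film rental and fixed costs."""
    return ADMISSION * hours_sold - rental - FIXED_COSTS


high = weekly_profit(CAPACITY, 1_200 * 100)      # full house at $1,200 rental
low = weekly_profit(CAPACITY // 2, 600 * 100)    # half capacity at half rental

# Highest rental for the half-capacity film that still matches the profit
# available from showing the high-capacity film instead:
max_rental = ADMISSION * (CAPACITY // 2) - FIXED_COSTS - high

print(high / 100, low / 100, max_rental / 100)   # 260.0 -120.0 220.0
```

Note that $220 is indeed under a fifth of the $1,200 rental of the high-capacity film, as the text concludes.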

These sharply increasing returns to selling capacity made the setting of production outlays important, as a right price/capacity ratio was crucial to win foreign markets.

How Films Became Branded Products

To make sure film revenues reached above cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors and for rights to famous plays and novels. This is still a major characteristic of the film industry today that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they sometimes may seem. Actually, they might be just as ‘rational’ and have just as quantifiable a return as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer-goods industries, but the short product life-cycle forced them to extend the brand beyond one product – using trademarks or stars – to buy existing ‘brands,’ such as famous plays or novels, and to deepen the product life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories signaled a film’s qualities to some extent: whatever else was uncertain about a film, audiences knew it at least contained them. Consumer preferences confirm that stars and stories were the main reason to see a film. Further, the fame of stars is distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent of their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer-goods industries in the form of merchandising.

Already from the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to continuously track brand-awareness of their major stars among the public (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, Gable was consistently a top star, while Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner’s popularity differed by only a few percentage points. Additional segmentation by city size seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Among the richest consumers, 51 percent wanted to see a movie starring Gable, but they constituted just 14 percent of Gable’s market, while the poorest consumers – 57 percent of whom were Gable fans – constituted 34 percent of it. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partially for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, and thereby disproved the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, at rates varying from three to eleven percent per year over a period of nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938; in Britain it increased from 16,404 to 37,537 spectator-hours, and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partially by the addition of more capital (in the form of film technology and film production outlays) and partially by simply producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all three countries, this increase in efficiency was at least one and a half times the increase in efficiency at the level of the entire economy; for the U.S. it was as much as five times, and for France more than three times, the national increase in efficiency (Bakker 2004a).
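The ‘increase in efficiency’ here is total factor productivity growth in the usual growth-accounting sense. Schematically (a textbook formulation; the exact specification in Bakker 2004a may differ):

$$g_{TFP} = g_Y - \alpha\, g_K - (1-\alpha)\, g_L,$$

where $g_Y$ is the growth rate of output (spectator-hours), $g_K$ and $g_L$ are the growth rates of capital and labor inputs, and $\alpha$ is capital’s share of income.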

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it had in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore pushed productivity in similar directions in all countries; the tradable part of the entertainment industry now exerted competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity both in 1900 and in 1938), and higher efficiency increases in the U.S. and, to a lesser extent, in France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is by using a social savings methodology. If we assume that cinema did not exist and all demand for entertainment (measured in spectator-hours) would have to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent ($2.5 billion) of GDP, in France to just 1.4 percent ($0.16 billion) and in Britain to only 0.3 percent ($0.07 billion) of GDP.
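In schematic form (our notation, illustrating the methodology rather than quoting it), the social savings of cinema can be written as

$$SS = (c_{\text{live}} - c_{\text{cinema}}) \times Q_{\text{cinema}},$$

where $c_{\text{live}}$ and $c_{\text{cinema}}$ are the unit costs of a spectator-hour supplied by live entertainment and by cinema, and $Q_{\text{cinema}}$ is the number of spectator-hours cinema actually delivered: the extra cost society would have borne had cinema’s output been produced by live entertainment instead.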

A third and different way to look at the contribution of film technology to the economy is to look at the consumer surplus generated by cinema. Contrary to the TFP and social savings techniques used above, which assume that cinema was a substitute for live entertainment, this approach assumes that cinema was a wholly new good, so that the entire consumer surplus it generated is ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S., to about half in Britain, and as much as three-quarters in France.
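Formally (a standard textbook definition, not a formula quoted from the study): with $D(p)$ the market demand curve for spectator-hours, $p^*$ the actual ticket price and $\bar{p}$ the price at which demand falls to zero, the aggregate consumer surplus is

$$CS = \int_{p^*}^{\bar{p}} D(p)\, dp,$$

the area under the demand curve above the price actually paid.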

All the measures show that by the late 1930s cinema was making an essential contribution in increasing total welfare as well as the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized, and remained the same, with some fluctuations, until the mid-1990s. The decline in screens was more limited. After 1963 the number of screens increased again steadily to reach nearly twice the 1945 level in the 1990s. Since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, then made a rebound during the 1960s, to start a long and steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, after which it more than doubled. Since the early 1970s, the price has been declining again and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First (obviously), the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies. This meant that the Hollywood studios produced only part of the films they distributed, replaced the long-term, seven-year contracts with star actors with per-film contracts, and sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling portfolios of films, contracting and financing most of them, and marketing and distributing them world-wide.

These developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry and harbored an intricate network of these businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk – at times over half – of their production to Europe in the 1960s. This was stimulated by lower European production costs, difficulties in repatriating foreign film revenues, and the vertical disintegration in California, which had severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could adapt better to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed that in the U.S. with a delay of about ten years (Figure 6). The drop in the number of screens experienced the same lag, but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production had disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry – actually one of the fastest-growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution: Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if they made profits in some years. The only postwar entry strategy that was successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived. The slide in box office revenue was brought to a standstill. Revenues were stabilized by the joint effect of seven different factors. First, the blockbuster movie, heavily marketed and supported by intensive television advertising, increased cinema attendance; Jaws was one of the first of these movies and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in higher perceived quality and a larger quality difference with television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas – cinemas with several screens – increased consumer choice and the appeal of cinema by offering more variety within a specific cinema, thus also decreasing the difference with television in this respect. Fifth, one could argue that the process of flexible specialization of the California film industry was completed in the early 1970s, making the industry able to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the final ending of an era. Sixth, new income streams from video sales and rentals and cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards further growth was driven by newer markets in Eastern Europe and Asia. Film industries from outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual film industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, grossed 800,000 euros world-wide, and reached an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in performance is remarkable. Even when measured in gross return on investment or gross margin, the U.S. still had a fivefold and a twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s they had experienced difficulties obtaining broadcasting licenses, because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will simply become the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century was driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a line of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben. “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprint 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis, sixth edition. Cambridge: Cambridge University Press, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European vs. 287 percent for U.S. films. Gross margin was 37 percent for European vs. 74 percent for U.S. films. Costs per viewer were 3.33 vs. 1.43 euros; revenues per viewer were 5.30 vs. 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

Education and Economic Growth in Historical Perspective

David Mitch, University of Maryland Baltimore County

In his introduction to the Wealth of Nations, Adam Smith (1776, p. 1) states that the proportion between the annual produce of a nation and the number of people who are to consume that produce depends on “the skill, dexterity, and judgment with which its labour is generally applied.” In recent decades, analysts of economic productivity in the United States during the twentieth century have made allowance for Smith’s “skill, dexterity, and judgment” of the labor force under the rubric of labor force quality (Ho and Jorgenson 1999; Aaronson and Sullivan 2001; DeLong, Goldin, and Katz 2003). These studies have found that a variety of factors have influenced labor force quality in the U.S., including age structure and workforce experience, female labor force participation, and immigration. One of the most important determinants of labor force quality has been years of schooling completed by the labor force.

Data limitations complicate generalizing these findings to periods before the twentieth century and to geographical areas beyond the United States. However, the rise of modern economic growth over the last few centuries seems to roughly coincide with the rise of mass schooling throughout the world. The sustained growth in income per capita evidenced in much of the world over the past two to two and a half centuries is a marked divergence from previous tendencies. Kuznets (1966) used the phrase “modern economic growth” to describe this divergence and he placed its onset in the mid-eighteenth century. More recently, Maddison (2001) has placed the start of sustained economic growth in the early nineteenth century. Maddison (1995) estimates that per capita income between 1520 and 1992 increased some eight times for the world as a whole and up to seventeen times for certain regions. Popular schooling was not widespread anywhere in the world before 1600. By 1800, most of North America, Scandinavia, and Germany had achieved literacy rates well in excess of fifty percent. In France and England literacy rates were closer to fifty percent and school attendance before the age of ten was certainly widespread, if not yet the rule. It was not until later in the nineteenth century and the early twentieth century that Southern and Eastern Europe were to catch up with Western Europe and it was only the first half of the twentieth century that saw schooling become widespread through much of Asia and Latin America. Only later in the twentieth century did schooling begin to spread throughout Africa. The twentieth century has seen the spread of secondary and university education to much of the adult population in the United States and to a lesser extent in other developed countries.[2] However, correlation is not causation; rising income per capita may have contributed to rising levels of schooling, as well as schooling to income levels. Thus, the contribution of rising schooling to economic growth should be examined more directly.

Estimating the Contribution of the Rise of Mass Schooling to Economic Growth: A Growth Accounting Perspective

Growth accounting can be used to estimate the general bounds of the contribution the rise of schooling has made to economic growth over the past few centuries.[3] A key assumption of growth accounting is that factors of production are paid their social marginal products. Growth accounting starts with estimates of the growth of individual factors of production, as well as the shares of these factors in total output and estimates of the growth of total product. It then apportions the growth in output into that attributable to growth in each factor of production specified in the analysis and into that due to a residual that cannot otherwise be explained. Estimates of how much schooling has increased the productivity of individual workers, combined with estimates of the increase in schooling completed by the labor force, yield estimates of how much the increase in schooling has contributed to increasing output. A growth accounting approach offers the advantage that with basic estimates (or at least possible ranges) for trends in output, labor force, schooling attainment, and preferably capital stock and factor shares, it yields estimates of schooling’s contribution to economic growth. An important disadvantage is that it relies on indirect estimates at the micro level for how schooling influences productivity at the aggregate level, rather than on direct empirical evidence.[4]
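
As a minimal illustration of the mechanics (a sketch only; the factor share and growth rates below are hypothetical, chosen for arithmetic clarity, and not estimates from the literature), the decomposition can be written in a few lines of Python:

```python
# Growth accounting sketch: apportion output growth between factor
# growth and an unexplained residual. All numbers are illustrative.

alpha = 0.7        # labor's share of output (assumed)
g_output = 0.020   # annual growth rate of total output
g_labor = 0.012    # annual growth of quality-adjusted labor input
g_capital = 0.015  # annual growth of the physical capital stock

# Residual growth not attributable to measured factor growth:
residual = g_output - alpha * g_labor - (1 - alpha) * g_capital
print(round(residual, 4))  # 0.0071, i.e. about 0.7% per year
```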

Back-of-the-envelope estimates of the increase in income per capita attributable to rising levels of education over a period of a few centuries can be obtained by combining possible ranges of schooling increases (measured in average years of schooling) with possible ranges of the rate of return per year of schooling (the percentage by which a year of schooling raises earnings) and common ranges for labor’s share in national income. Using a Cobb-Douglas specification of the aggregate production function with two factors of production, labor and physical capital, one arrives at the following equation for the ratio between final and initial national income per worker due to increases in average school years completed between the two time periods:

1) (Y/L)_1 / (Y/L)_0 = ((1 + r)^(S_1 - S_0))^α

where Y = output, L = the labor force, r = the percentage by which a year of schooling increases labor productivity, S is the average years of schooling completed by the labor force in each time period, α is labor’s share in national income, and the subscripts 0 and 1 denote the initial and final time periods over which the schooling changes occur.[5] This formulation is a partial equilibrium one, holding constant the level of physical capital. However, the level of physical capital should be expected to increase in response to improved labor force quality due to more schooling. A common specification of a growth model that allows for such responses of physical capital implies the following ratio between final and initial national income per worker (see Lord 2001, 99-100):

2) (Y/L)_1 / (Y/L)_0 = (1 + r)^(S_1 - S_0)

The bounds on increases in years of schooling can be placed at between zero and 16, that is, between a completely unschooled and presumably illiterate population and one in which a college education is universal. As bounds on the return per year of schooling, one can employ Krueger and Lindahl’s (2001) survey of recent estimates of earnings functions, which finds returns ranging from 5 percent to 15 percent. The implications of varying these two parameters are reported in Tables 1A and 1B. Table 1A reports estimates based on the partial equilibrium specification of equation 1), holding constant the level of physical capital. Table 1B reports estimates allowing for a changing level of physical capital, as in equation 2). Labor’s share of income has been set at a commonly used value of 0.7 (see DeLong, Goldin and Katz 2003, 29; Maddison 1995, 255).
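
A short Python sketch of equations 1) and 2) reproduces the entries in Tables 1A and 1B below (the function names are illustrative; the parameter values are those just stated in the text):

```python
ALPHA = 0.7  # labor's share of national income, as in the text

def income_ratio_fixed_capital(r, delta_s, alpha=ALPHA):
    """Equation 1): income-per-worker ratio, capital stock held fixed."""
    return ((1 + r) ** delta_s) ** alpha

def income_ratio_steady_state(r, delta_s):
    """Equation 2): income-per-worker ratio with capital at steady state."""
    return (1 + r) ** delta_s

# The 6-year rows of Tables 1A and 1B:
for r in (0.05, 0.10, 0.15):
    print(round(income_ratio_fixed_capital(r, 6), 2),
          round(income_ratio_steady_state(r, 6), 2))
# Prints (1.23, 1.34), (1.49, 1.77), (1.8, 2.31)
```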

Table 1A
Increase in per Capita Income over a Base Level of 1 Attributable to Hypothetical Increases in Average Schooling Levels — Holding the Physical Capital Stock Constant

                                              Percent Increase in Earnings per Extra Year of Schooling
Increase in Average Years of Schooling        5%       10%      15%
1                                             1.035    1.07     1.10
3                                             1.11     1.22     1.34
6 (illiteracy to universal grammar school)    1.23     1.49     1.80
12 (illiteracy to universal high school)      1.51     2.23     3.23
16 (illiteracy to universal college)          1.73     2.91     4.78

Table 1B
Increase in per Capita Income over a Base Level of 1 Attributable to Hypothetical Increases in Average Schooling Levels — Allowing for Steady-state Changes in the Physical Capital Stock

                                              Percent Increase in Earnings per Extra Year of Schooling
Increase in Average Years of Schooling        5%       10%      15%
1                                             1.05     1.10     1.15
3                                             1.16     1.33     1.52
6 (illiteracy to universal grammar school)    1.34     1.77     2.31
12 (illiteracy to universal high school)      1.79     3.14     5.35
16 (illiteracy to universal college)          2.18     4.59     9.36

The back-of-the-envelope calculations in Tables 1A and 1B make two simple points. First, schooling increases have the potential to explain a good deal of estimated long-term increases in per capita income. With the average member of an economy’s labor force embodying twelve years of schooling, a moderate ten-percent rate of return per year of schooling, and no increase in the capital stock, rising schooling can account for at least 17 percent of Maddison’s eight-fold increase in per capita income (i.e., 1.23/7). Indeed, a 16-year schooling increase, allowing for steady-state capital stock increases and at a 15 percent per year return, overexplains Maddison’s eight-fold increase (8.36/7). After all, if schooling has had substantial effects on the productivity of individual workers, if a sizable share of the labor force has experienced improvements in schooling completed, and if labor’s share of output is greater than half, then the contribution of rising schooling to increasing output should be large.

Second, the contribution to per capita income growth of the schooling increases that have actually occurred historically is more modest, accounting for at best about one fifth of Maddison’s eight-fold increase. Thus an increase in average years of schooling completed by the labor force of 6 years, roughly that entailed by the spread of universal grammar schooling, would account for 19 percent (1.31/7) of an eight-fold per capita output increase at a high 15 percent rate of return, allowing for steady-state changes in the physical capital stock (Table 1B). And at a low 5 percent return per year of schooling, the contribution would be only 5 percent of the increase (0.34/7). Making lower-level elementary education universal would entail increasing average years of schooling completed by the labor force by 1 to 3 years; in most circumstances this is not a trivial accomplishment as measured by the societal resources required. However, even at a high 15 percent per year return and allowing for steady-state changes in the capital stock (Table 1B), a 3-year increase in average years of schooling would account for only 7 percent (0.52/7) of Maddison’s eight-fold increase.

How do the above proposed bounds on schooling increases compare with possible increases in the physical capital stock? Kendrick (1993, 143) finds a somewhat larger growth rate in his estimated human capital stock than in the stock of non-human capital for the U.S. between 1929 and 1969, though for the sub-period 1929-48 he estimates a slightly higher growth rate for the non-human capital stock. In contrast, Maddison (1995, 35-37) estimates larger increases in the value of non-residential structures per worker and in the value of machinery and equipment per worker than in years of schooling per adult for the U.S. and the U.K. between 1820 and 1992. For the U.S., he estimates that the value of non-residential structures per worker rose 21-fold and the value of machinery and equipment per worker rose 141-fold, in comparison with a ten-fold increase in years of schooling per adult between 1820 and 1992. For the U.K., his estimates indicate a 15-fold increase in the value of structures per worker and a 97-fold increase in the value of machinery and equipment per worker, in contrast with a seven-fold increase in average years of schooling between 1820 and 1992. It should be noted that these estimates are based on cumulated investments in schooling to estimate human capital; that is, they are based on the costs incurred to produce human capital. Davies and Whalley (1991, 188-189) argue that the alternative approach of calculating the present value of future earnings premiums attributable to schooling and other forms of human capital yields substantially higher estimates of human capital, because it captures inframarginal returns above the costs accruing to human capital investments. For the growth accounting approach employed here, the cumulated investment or cost approach would seem the appropriate one. Are there more inherent bounds on the accumulation of human capital over time than on non-human capital? One limit on the accumulation of human capital is set by how much of one’s potential working life a worker is willing to sacrifice for purposes of improving education and future productivity. This can be compared with the corresponding limit on the willingness to sacrifice current consumption for wealth accumulation.

However, this discussion makes no explicit allowance for changes over time in the quality of schooling. Improvements in teacher training and teacher recruitment, along with ongoing curriculum developments, among other factors, could lead to ongoing improvements over time in how much a year of school attendance improves the underlying future productivity of the student. Woessmann (2002) and Hanushek and Kimko (2000) have recently argued for the importance of allowing for variation in school quality in estimating the impact of cross-national variation in human capital levels on economic growth. Woessmann (2002) suggests that allowing for improvements in the quality of schooling can remove the upper bounds on schooling investment that would be present if it were simply a matter of increasing the percentage of the population enrolled in school at given levels of quality. While there would seem to be inherent bounds on the proportion of one’s life that one is willing to spend in school, such bounds would not apply to increases in expenditures and other means of improving school quality.

Expenditures per pupil appear to have risen markedly over long periods of time. In the United States, expenditure per pupil in public elementary and secondary schools in constant 1989-90 dollars rose more than six-fold between 1923-24 and 1973-74 (National Center for Educational Statistics, 60). And in Victorian England, nominal expenditures per pupil in state-subsidized schools more than doubled between 1870 and 1900; with prices falling over the period, the real increase was larger still (Mitch 1982, 204). These figures do not control for the rising percentage of students enrolled in higher grade levels (presumably at higher expenditure per student), general rises in living standards affecting teachers’ salaries, and other factors influencing the abilities of those recruited into teaching. Nevertheless, they suggest the possibility of sizable improvements over time in school quality.

It can be argued that allowance for improvements in school quality is implicitly made in the rate of return imputed per year of schooling completed on average by the labor force. Insofar as schools became more effective over time in transmitting knowledge and skills, the economic return per year of schooling should have increased correspondingly. Thus any attempt to allow for school quality in a growth accounting analysis should be careful to avoid double counting school quality in both school inputs and returns per year of schooling.

The benchmark for the impact of increases in average levels of schooling completed in Table 1 is Maddison’s estimates of changes in output per capita over the last two centuries. In fact, major increases in schooling levels have most commonly been compressed into intervals of several decades or less, rather than periods of a century or more. This implies that the contribution to output growth of improvements in labor force quality due to increases in schooling levels would have been concentrated primarily in periods of marked improvement in schooling levels and would have been far more modest during periods of more sluggish increase in educational attainment. In order to gauge the impact of the time interval over which changes in schooling occur on growth rates of output, Table 2 provides the annual change in average years of schooling implied by some of the hypothetical changes in average levels of schooling attainment reported in Table 1, for various time periods.

Table 2

Annual Change in Average Years of Schooling per Adult per Year Implied by Hypothetical Figures in Table 1

Total Increase in Average          Time Period over Which Increase Occurred
Years of Schooling per Adult       5 years   10 years   30 years   50 years   100 years
1                                  0.2       0.1        0.033      0.02       0.01
3                                  0.6       0.3        0.1        0.06       0.03
6                                  1.2       0.6        0.2        0.12       0.06
9                                  1.8       0.9        0.3        0.18       0.09

Table 3 translates these rates of schooling growth into output growth rates using the partial equilibrium framework of equation 1), with labor’s share again set at 0.7. The contribution of schooling to growth rates of output and output per capita can be calculated as labor’s share times the percentage return per year of schooling times the annual increase in average years of schooling.
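
The calculation behind Tables 3A and 3B below can be sketched directly (illustrative Python; the function name is invented, and the 0.7 labor share is the same assumption as above):

```python
ALPHA = 0.7  # labor's share, as in Table 1

def schooling_growth_contribution(r, rise_in_years, period_in_years):
    """Annual output growth contributed by schooling: labor's share
    times the return per year of schooling times the annual increase
    in average years of schooling."""
    return ALPHA * r * (rise_in_years / period_in_years)

# A 6-year rise over 30 years at a 5 percent return (Table 3A):
print(round(schooling_growth_contribution(0.05, 6, 30), 4))  # 0.007, i.e. 0.7%
```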

Table 3A
Contribution of Large Increases in Schooling to Annual Growth Rates of Output

Length of Time for    6-Year Rise in Average Years    9-Year Rise in Average Years
Schooling Increase    5% return      10% return       5% return      10% return
30 years              0.7%           1.4%             1.05%          2.1%
50 years              0.42%          0.84%            0.63%          1.26%

Table 3B
Contribution of Small to Modest Increases in Schooling to Annual Growth Rates of Output

Length of Time for    1-Year Rise in Average Years    3-Year Rise in Average Years
Schooling Increase    5% return      10% return       5% return      10% return
5 years               0.7%           1.4%             2.1%           4.2%
10 years              0.35%          0.7%             1.05%          2.1%
20 years              0.175%         0.35%            0.525%         1.05%
30 years              0.12%          0.23%            0.35%          0.7%
50 years              0.07%          0.14%            0.21%          0.42%
100 years             0.035%         0.07%            0.105%         0.21%

The case of the U.S. in the twentieth century, as analyzed in DeLong, Goldin and Katz (2003), offers an example of how apparent limits, or at least resistance, to the ongoing expansion of schooling have lowered the contribution of schooling to growth. They find that between World War I and the end of the century, improvements in labor quality attributable to schooling can account for about a quarter of the growth of output per capita in the U.S.; this is similar in magnitude to Denison’s (1962) estimates for the first part of this period. This era saw mean years of schooling completed by age 35 increase from 7.4 years for an American born in 1875 to 14.1 years for an American born in 1975 (DeLong, Goldin and Katz 2003, 22). However, in the last two decades of the twentieth century the rate of increase of mean years of schooling completed leveled off, and correspondingly the contribution of schooling to labor quality improvements fell by almost half.

Maddison (1995) has compiled estimates of the average years of schooling completed for a number of countries going back to 1820. It is indicative of the sparseness of estimates of schooling completed by the adult population that Maddison reports estimates for only three countries, the U.S., the U.K., and Japan, all the way back to 1820. Maddison’s figures come from other studies and their reliability warrants further critical scrutiny than can be accorded them here. Since systematic census evidence on adult educational attainment did not begin until the mid-twentieth century, estimates of labor force educational attainment prior to 1900 should be treated with some skepticism. Nevertheless, Maddison’s estimates can be used to give a sense of plausible changes in levels of schooling completed over the last century and a half. The average increases in years of schooling per year for various time periods implied by Maddison’s figures are reported in Table 4. Maddison constructed his figures by giving primary education a weight of 1, secondary education a weight of 1.4, and tertiary education a weight of 2, based on evidence on relative earnings for each level of education.
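
The weighting can be illustrated with a small sketch (the weights are Maddison’s; the years-by-level figures are hypothetical, invented for the example):

```python
# Maddison's weights by level of education, based on relative earnings.
WEIGHTS = {"primary": 1.0, "secondary": 1.4, "tertiary": 2.0}

def weighted_years(years_by_level):
    """Weighted years of schooling, constructed in the Maddison manner."""
    return sum(WEIGHTS[level] * years
               for level, years in years_by_level.items())

# A hypothetical adult with 6 primary, 3 secondary and 1 tertiary year:
print(round(weighted_years({"primary": 6, "secondary": 3, "tertiary": 1}), 1))
# 12.2
```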

Table 4
Estimates of the Annual Change in Average Years of Schooling per Person aged 15-64 for Selected Countries and Time Periods

Country        1913-1973    1870-1973    1870-1913
U.S.           0.112        0.107        0.092
France         0.0783
Germany        0.053
Netherlands    0.064
U.K.           0.0473       0.0722       0.102
Japan          0.112        0.106        0.090

Source: Maddison (1995), 37, Table 2-3

Table 5
Annual Growth Rates in GDP per Capita

Region                          1820-70   1870-1913   1913-50   1950-73   1973-92
12 West European Countries      0.9       1.3         1.2       3.8       1.8
4 Western Offshoots             1.4       1.5         1.3       2.4       1.2
5 South European Countries      n.a.      0.9         0.7       4.8       2.2
7 East European Countries       n.a.      1.2         1.0       4.0       -0.8
7 Latin American Countries      n.a.      1.5         1.9       2.4       0.4
11 Asian Countries              0.1       0.7         -0.2      3.1       3.5
10 African Countries            n.a.      n.a.        1.0       1.8       -0.4

Source: Maddison (1995), 62-63, Table 3-2.

In comparing Tables 2 and 4, it can be observed that the estimated actual changes in years of schooling compiled by Maddison, as well as the average over 55 countries reported by Lichtenberg (1994) for the third quarter of the twentieth century, fall between a lower bound set by a 3-year increase in average schooling spread over a century and an upper bound set by a 6-year increase in average schooling spread over 50 years.

Equations 1) and 2) above assume that each year of schooling of a worker has the same impact on productivity. In fact it has been common to find that the impact of schooling on productivity varies according to level of education. While the rate of return as a percentage of costs tends to be higher for primary than for secondary schooling, which is in turn higher than for tertiary education, this reflects the far lower costs, especially lower foregone earnings, of primary schooling (Psacharopoulos and Patrinos 2004). The earnings premium per year of schooling tends to be higher for higher levels of education, and this earnings premium, rather than the rate of return as a percentage of costs, is the appropriate measure for assessing the contribution of rising schooling to growth (OECD 2001). Accordingly, growth accounting analyses commonly construct schooling indexes weighting years of schooling according to estimates of each year’s impact on earnings (see for example Maddison 1995; Denison 1962). DeLong, Goldin and Katz (2003) use chain-weighted indexes of returns according to each level of schooling. A rough approximation of the effect of allowing for variation in economic impact by level of schooling in the analysis in Table 1 is simply to focus on the mid-range 10 percent rate of return as an approximate average of high, medium, and low returns.[6]

The U.S. is notable for rapid expansion in schooling attainment over the twentieth century at both the secondary and tertiary level, while in Europe widespread expansion has tended to focus on the primary and lower secondary level. These differences are evident in Denison’s estimates of the actual differences in educational distribution between the United States and a number of Western European countries in the mid-twentieth century (see Table 6).

Table 6

Percentage Distributions of the Male Labor Force by Years of Schooling Completed

Years of School Completed    United States 1957    France 1954    United Kingdom 1951    Italy 1961
0                            1.4                   0.3            0.2                    13.7
1-4                          5.7                   2.4            0.2                    26.1
5-6                          6.3                   19.2           0.8                    38.0
7                            5.8                   21.1           4.0                    4.2
8                            17.2                  27.8           27.2                   8.1
9                            6.3                   4.6            45.1                   0.7
10                           7.3                   4.1            8.4                    0.7
11                           6.0                   6.5            7.3                    0.6
12                           26.2                  5.4            2.5                    1.8
13-15                        8.3                   5.4            2.2                    3.0
16 or more                   9.5                   3.2            2.1                    3.1

Source: Denison (1967), 80, Table 8-1.

Some segments of the population are likely to experience much greater enhancements of productivity from additional years of schooling than others. Insofar as the more able benefit more from schooling than the rest of the ability distribution, putting substantially greater relative emphasis on the expansion of higher levels of schooling could considerably augment growth rates relative to a more egalitarian strategy. This result would follow from a substantially greater premium assigned to higher levels of education. However, some studies of education in developing countries have found that they allocate a disproportionate share of resources to tertiary schooling at the expense of primary schooling, reflecting efforts of elites to benefit their offspring. How far this has impeded economic growth would depend on the disparity in rates of return among levels of education, a point of some controversy in the economics of education literature (Birdsall 1996; Psacharopoulos 1996).

While allocating schooling disproportionately towards the more able in a society may have promoted growth, there would have been corresponding losses from groups that were systematically excluded from education, or at least restricted in their access to it, by discrimination based on factors such as race, gender, and religion (Margo 1990). These losses can be attributed in part to the failure to provide individuals of high ability within groups experiencing discrimination with sufficient education to utilize their talents properly. However, historians such as Ashton (1948, 15) have argued that the exclusion of non-Anglicans from English universities prior to the mid-nineteenth century resulted in the channeling of their talents into manufacturing and commerce.

Even if returns have been higher at some levels of education than others, a sustained and substantial increase in labor force quality would seem to entail an egalitarian strategy of widespread increase in access to schooling. The contrast between the rapid increase in access to secondary and tertiary schooling in the U.S. and the much more limited increase in access in Europe during the twentieth century with the correspondingly much greater role for schooling in accounting for economic growth in the U.S. than in Europe (see Denison 1967) points to the importance of an egalitarian strategy in sustaining ongoing increases in aggregate labor force quality.

One would expect an increase in the relative supply of more-schooled labor to lead to a decline in the premium to schooling, other things equal. Some recent analyses of the contribution of schooling to growth have allowed for this by specifying a parametric relationship between the distribution of schooling in an economy’s labor force and its impact on output or on a hypothesized intermediary human capital factor (Bils and Klenow 2000).[7]

Direct empirical evidence on trends in the premium to schooling is helpful both to obviate reliance on a theoretical specification and to allow for factors, such as technical change, that may have offset the impact of the increasing supply of schooling. Goldin and Katz (2001) have developed evidence on trends in the premium to schooling over the twentieth century that has allowed them to adjust for these trends in estimating the contribution of schooling to economic growth (DeLong, Goldin and Katz 2003). They find a marked fall in the premium to schooling, roughly halving between 1910 and 1950. However, they also find that this decline in the schooling premium was more than offset by their estimated increase over this same period of 2.9 years of schooling completed by the average worker, and hence that on net schooling increases contributed to the improved productivity of the U.S. workforce. They estimate increases of 0.5 percent per year in labor productivity due to increased educational attainment between 1910 and 1950, relative to an average total annual increase in labor productivity of 1.62 percent over the entire period 1915 to 2000. For the period since 1960, DeLong, Goldin and Katz find that the premium to education has increased while the growth of educational attainment first rose and then declined. During this latter period, the improvement in labor force quality has slowed, as noted above, despite a widening premium to education, owing to the slowdown in the increase in educational attainment.
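
As a rough consistency check, using the back-of-the-envelope formula from the growth accounting section above rather than DeLong, Goldin and Katz’s own chain-weighted method, a 2.9-year rise in schooling spread over the forty years 1910-1950, at roughly a 10 percent return per year of schooling and a 0.7 labor share, implies 0.7 × 0.10 × (2.9/40) ≈ 0.005, or about 0.5 percent per year, in line with their estimate.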

Classifying the Range of Possible Relationships between Schooling and Economic Growth

In generalizing beyond the twentieth-century U.S. experience, allowance should be made both for the role of influences other than education on economic growth and for the possibility that the impact of education on growth can vary considerably according to the historical situation. In fact to understand why and how education might contribute to economic growth over the range of historical experience, it is important to investigate the variation in the impact of education on growth that has occurred historically. In relating education to economic growth, one can distinguish four basic possibilities.

The first is one of stagnation in both educational attainment and output per head. Arguably, this was the most common situation throughout the world until 1750, and even after that date it characterized Southern and Eastern Europe through the late nineteenth century, as well as most of Africa, Asia, and Latin America through the mid-twentieth century. The qualifier “arguably” is inserted here because this view of the matter almost surely makes inadequate allowance for improvements in the informal acquisition of skills through family transmission and direct experience, as well as through more formal non-schooling channels such as guild-sponsored apprenticeships, an aspect to be taken up further below. It also makes no allowance for possible long-term improvements in per capita income that took place prior to 1750 but have been inadequately documented. Still, focusing on formal schooling as the source of improvement in labor force quality, there is reason to think that this may have been a pervasive situation throughout much of human history.

The second situation is one in which income per capita rose despite stagnating education levels; factors other than improvements in educational attainment were generating economic growth. England during its industrial revolution, 1750 to 1840, is a notable instance in which some historians have argued that this situation prevailed. During this period, English schooling and literacy rates rose only slightly, if at all, while income per capita appears to have risen. Literacy and schooling appear to have been of little use in newly created manufacturing occupations such as cotton spinning. Indeed, literacy rates and schooling actually appear to have declined in some of the most rapidly industrializing areas of England, such as Lancashire (Sanderson 1972; Nicholas and Nicholas 1992). Not all have concurred with this interpretation of the role of education in the English industrial revolution, and the result depends on how educational trends are measured and how education is specified as affecting output (see Laqueur; Crafts 1995; Mitch 1999). Moreover, this makes no allowance for the role of informal acquisition of skills. Boot (1995) argues that in the case of cotton spinners, informal skill acquisition with experience was substantial.

The simplest interpretation of this situation is that factors other than schooling, or human capital more generally, contributed to economic growth. The clearest non-human-capital explanatory factor would perhaps be physical capital accumulation; another might be foreign trade. However, if one turns to technological advance as a driving force, then this gives rise to the possibility that human capital, at least broadly defined, was if not the underlying force then at least a central contributing factor in the industrial revolution. The argument for this possibility is that the improvements in knowledge and skills associated with technological advance are embodied in human agents and hence are forms of human capital. Recent work by Mokyr (2002) would suggest this interpretation. Nevertheless, the British industrial revolution remains a prominent instance in which human capital, conventionally defined as schooling, stagnated in the presence of a notable upsurge in economic growth. A less extreme case is provided by the post-World War II European catch-up with the United States: Denison’s (1967) growth accounting analysis indicates that this occurred despite slower European increases in educational attainment, which were offset by other factors. Historical instances such as the British industrial revolution call into question the common assumption that education is a necessary prerequisite for economic growth (see Mitch 1990).

The third situation is one in which rising educational attainment corresponds with rising rates of economic growth. This is the situation one would expect to prevail if education contributes to economic productivity and if any negative factors are not sufficient to offset this influence. One sub-set of instances would be those in which very large and reasonably compressed increases in the educational attainment of the labor force occurred. One important example is the twentieth-century U.S., with the high school movement followed by increases in college attendance, as noted above. Others are certain East Asian economies since World War II, for which Young’s (1995) growth accounting analysis documents the substantial contributions of rising educational attainment to rapid growth rates. Another sub-set of cases, corresponding to more modest increases in schooling, applies to countries experiencing schooling increases focused at the elementary level, as in much of Western Europe over the nineteenth century. The so-called literacy campaigns of the early and mid-twentieth century, as in the Soviet Union and Cuba (see Arnove and Graff, eds., 1987), with modest improvements in educational attainment over compressed time periods of just a few decades, could also be viewed as fitting into this sub-category. However, whether there were increases in output per capita corresponding to these more modest increases in educational attainment remains to be established.

The fourth situation is one in which economic growth has stagnated despite the presence of marked improvements in educational attainment. Possible examples of this situation would include the early rise of literacy in some Northern European areas, such as Scotland and Scandinavia, in the seventeenth and eighteenth centuries (see Houston 1988; Sandberg 1979) and some regions of Africa and Asia in the later twentieth century (see Pritchett 2001). One explanation of this situation is that it reflects instances in which any positive impact of educational attainment is small relative to other influences having an adverse impact. But one can also interpret it as reflecting situations in which incentive structures direct educated people into destructive and transfer activities inimical to economic growth (see North 1990; Baumol 1990; Murphy, Shleifer, and Vishny 1991).

Cross-country studies of the relationship between changes in schooling and growth since 1960 have yielded conflicting results, which in itself could be interpreted as supporting the presence of some mix of the four situations just surveyed. A number of studies have found at best a weak relationship between changes in schooling and growth (Pritchett 2001; Bils and Klenow 2000); others have found a stronger relationship (Topel 1999). Much seems to depend on issues of measurement and on how the relationship between schooling and output is specified (Temple 2001b; Woessmann 2002, 2003).

The Determinants of Schooling

Whether education contributes to economic growth can be seen as depending on two factors: the extent to which educational levels improve over time and the impact of education on economic productivity. The first factor is a topic for extended discussion in its own right and no attempt will be made to consider it in depth here. Factors commonly considered include rising income per capita, the distribution of political power, and cultural influences (Goldin 2001; Lindert 2004; Mariscal and Sokoloff 2000; Easterlin 1981; Mitch 2004). The issue of endogeneity has often been raised with respect to the determinants of schooling. Thus, it is plausible that rising income contributes to rising levels of schooling, and that the spread of mass education can influence the distribution of political power as well as the reverse. While these are important considerations, they are sufficiently complex to warrant extended attention in their own right.[8]

Influences on the Economic Impact of Schooling

Insofar as schooling improves general human intellectual capacities, it could be seen as having a universal impact irrespective of context. However, Rosenzweig (1995, 1999) has noted that even the general influence of education on individual productivity or adaptability depends on the complexity of the situation. He notes that for agricultural tasks primarily involving physical exertion, no difference in productivity is evident between workers according to education levels; in more complex allocative decisions, however, education does enhance performance. This could account for findings that literacy rates were low among cotton spinners in the British industrial revolution despite findings of substantial premiums to experience (Sanderson 1972; Boot 1995). However, other studies have found literacy to have a substantial positive impact on labor productivity in cotton textile manufacture in the U.S., Italy, and Japan (Bessen 2003; A’Hearn 1998; Saxonhouse 1977) and have suggested a connection between literacy and labor discipline.

A more macro influence is the changing sectoral composition of the economy. It is common to suggest that the service and manufacturing sectors have more functional uses for educated labor than the agricultural sector, and hence that the shift from agriculture to industry in particular will lead to greater use of educated labor and in turn require a more educated labor force. However, there are no clear theoretical or empirical grounds for the claim that agriculture makes less use of educated labor than other sectors of the economy. In fact, farmers have often had relatively high literacy rates, and there are more obvious functional uses for education in agriculture, in keeping accounts and keeping up with technological developments, than in manufacturing. Nilsson et al. (1999) argue that the process of enclosure in nineteenth-century Sweden, with the increased demands for reading and writing land transfer documents that it entailed, increased the value of literacy in the Swedish agrarian economy. The finding noted above that those in cotton textile occupations associated with early industrialization in Britain had relatively low literacy rates is one indication of the lack of any clear-cut ranking across broad economic sectors in the use of educated labor.

Changes in the organization of decision making within major sectors as well as changes in the composition of production within sectors are more likely to have had an impact on demands for educated labor. Thus, within agriculture the extent of centralization or decentralization of decision making, that is the extent to which farm work forces consisted of farmers and large numbers of hired workers or of large numbers of peasants each with scope for making allocative decisions, is likely to have affected the uses made of educated labor in agriculture. Within manufacturing, a given country’s endowment of skilled relative to unskilled labor has been seen as influencing the extent to which openness to trade increases skill premiums, though this entails endogenous determination (Wood 1995).

Technological advance would have tended to boost the demand for more skilled and educated labor if technological advance and skills are complementary, as is often asserted. However, there is no theoretical reason why technology and skills need be complementary; indeed, concepts of directed technological change or induced innovation would suggest that in the presence of relatively high skill premiums, technological advance would be skill-saving rather than skill-using. Goldin and Katz (1998) have argued that the shift from the factory to continuous processing and batch production, associated with the shift of power sources from steam to electricity in the early twentieth century, led to rising technology-skill complementarity in U.S. manufacturing. It remains to be established how general this trend has been. It could be related to the distinction between the dominance in the United States of extensive growth in the nineteenth century, due to the growth of factors of production such as labor and capital, and the increasing importance of intensive growth in the twentieth century. Intensive growth is often associated with technological advance and a presumed enhanced value for education (Abramovitz and David 2000). Some analysts have emphasized the importance of capital-skill complementarity. For example, Galor and Moav (2003) point to the level of the physical capital stock as a key influence on the return to human capital investment; they suggest that once physical capital accumulation surpassed a certain level, the positive impact of human capital accumulation on the return to physical capital became large enough that owners of physical capital came to support the rise of mass schooling. They cite the case of schooling reform in early twentieth-century Britain as an example.

Even sharp declines in the premiums to schooling do not preclude a significant impact of education on economic growth. DeLong, Goldin and Katz’s (2003) growth accounting analysis for the twentieth century U.S. makes the point that even at modest positive returns to schooling on the order of 5 percent per year of schooling, with large enough increases in educational attainment, the contribution to growth can be substantial.

Human Capital

Economists have generalized the impact of schooling on labor force quality into the concept of human capital. Human capital refers to the investments that human beings make in themselves to enhance their economic productivity. These investments can take many forms and include not only schooling but also apprenticeship, a healthy diet, and exercise, among other possibilities. Some economists have even suggested that more amorphous societal factors such as trust, institutional tradition, technological know-how and innovation can all be viewed as forms of human capital (Temple 2001a; Topel 1999; Mokyr 2002). Thus broadly defined, human capital would appear a prime candidate for explaining much of the difference across nations and over time in output and economic growth. However, gaining much insight into the actual magnitudes and the channels of influence by which human capital might influence economic growth requires specification of both the nature and determinants of human capital and how human capital affects the aggregate production of an economy.

Much of the literature on human capital and growth makes the implicit assumption that some sort of numerical scale exists for human capital, even if multidimensional and even if unobservable. This in turn implies that it is meaningful to relate levels and changes of human capital to levels of income per capita and rates of economic growth. Given the multiplicity of factors that influence human knowledge and skill and in turn how these influence labor productivity, difficulties would seem likely to arise with attempts to measure aggregate human capital similar to those that have arisen with attempts to specify and measure the nature of human intelligence. Woessmann (2002, 2003) provides useful surveys of some of the issues involved in attempting to specify human capital at the aggregate level appropriate for relating it to economic growth.

One can distinguish between approaches to the measurement of human capital that focus on schooling, as in the discussion above, and those that take a broader view. Broad-view approaches try to capture all investments that may have improved human productivity from whatever source, including not just schooling but other productivity-enhancing investments, such as on-the-job training. The basic premise of broad-view approaches is that, for an aggregate economy, the income going to labor over and above what that labor would earn if it were paid the income of an unskilled worker can be viewed as the return to human capital. This measure can be constructed in various ways, including as a ratio using unskilled labor earnings as the denominator, as in Mulligan and Sala-i-Martin (1997), or using the share of labor income not going as compensation for unskilled labor, as in Crafts (1995) and Mitch (2004). Mulligan and Sala-i-Martin (2000) point to some of the major index number problems that can arise in using this approach to aggregate heterogeneous workers.

Crafts and Mitch find that for Britain during its late eighteenth and early nineteenth century industrial revolution between one-sixth and one-fourth of income per capita can be attributed to human capital measured as the share of labor income not going as compensation for unskilled labor.
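
The two constructions can be illustrated with a small sketch (the earnings figures are invented for the example and stand in for economy-wide averages):

```python
# Invented economy-wide figures, for illustration only.
average_earnings = 30000.0    # mean annual earnings per worker
unskilled_earnings = 18000.0  # annual earnings of an unskilled worker

# Ratio construction (in the spirit of Mulligan and Sala-i-Martin 1997):
hc_ratio = average_earnings / unskilled_earnings       # about 1.67

# Share construction (in the spirit of Crafts 1995; Mitch 2004):
# fraction of labor income not paid as unskilled compensation.
hc_share = 1 - unskilled_earnings / average_earnings   # 0.40

print(round(hc_ratio, 2), round(hc_share, 2))
```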

One approach that has been taken recently to estimate the role of human capital differences in explaining international differences in income per capita is to consider changes in immigrant earnings between origin and destination countries along with differences between immigrant and native workers in the destination country. Olson (1996) suggested that the large increase in earnings of immigrants commonly observed in moving from a low income to a high income country points to a small role for human capital in explaining the wide variation in per capita income across countries. Hendricks (2002) has used differences between immigrant and native earnings in the U.S. to estimate the contribution of otherwise unobserved skill differences to explaining differences in income per capita across countries and finds that they account for only a small part of the latter differences. Hendricks’ approach raises the issue of whether there could be long-term increases in otherwise unobserved skills that could have contributed to economic growth.

The Informal Acquisition of Human Capital

One possible source of such skills is through the informal acquisition of human capital through on-the-job experience. Insofar as work has been common from early adolescence onwards, the issue arises of why the aggregate stock of skills acquired through experience would vary over time and thus influence rates of economic growth. Some types of on-the-job experience which contribute to economic productivity, such as apprenticeship, may entail an opportunity cost and aggregate trends in skill accumulation will be influenced by societal willingness to incur such opportunity costs.

Insofar as schooling continues through adolescence, it can interfere with the accumulation of workforce experience. DeLong, Goldin and Katz (2003) note the tradeoff between rising average years of schooling completed and decreasing years of labor force experience in influencing the quality of the U.S. labor force in the last half of the twentieth century. Connolly (2004) has found that informal experience played a relatively greater role in Southern economic growth than in other regions of the United States.

Hansen (1997) has also distinguished the academically-oriented secondary schooling the United States developed in the late nineteenth and early twentieth century from the vocationally-oriented schooling and apprenticeship system that Germany developed over the same time period. Goldin (2001) argues that in the United States the educational system developed general abilities suitable for the greater opportunities for geographical and occupational mobility that prevailed there, while specific vocational training was more suitable for the more restricted mobility opportunities in Germany.

Little evidence exists on whether long-term trends in informal opportunities for skill acquisition have influenced growth rates. However, Smith’s (1776) view of the importance of the division of labor in influencing productivity would suggest that the impact of trends in these opportunities may well have been quite sizable.

Externalities from Education

Economists commonly claim that education yields benefits to society over and above the impact on labor market productivity perceived by the person receiving the education. These benefits can include impacts on economic productivity, such as impacts on technological advance. They can also include non-labor-market benefits. Thus McMahon (2002, 11), in his assessment of the social benefits of education, includes not only direct effects on economic productivity but also impacts on a) population growth rates and health, b) democratization, political stability, and human rights, c) the environment, d) reduction of poverty and inequality, e) crime and drug use, and f) labor force participation. While these effects may appear to involve primarily non-market activity, and thus would not be reflected in national output measures and growth rates, factors such as political stability, democratization, population growth, and health have obvious consequences for prospects for long-term growth. However, allowance should be made for the simultaneous influence of the distribution of political power and life expectancy on societal investments in schooling.

For the period since 1960, numerous studies have employed cross-country variation in various estimates of human capital and income per capita to estimate directly the impact of human capital on levels of income per capita and growth. A central goal of many such estimates is to see whether there are externalities to education on output over and above the private returns estimated from micro data. The results have been conflicting, and this has been attributed not only to problems of measurement error but also to differences in the specification of human capital and its impact on growth. There does not appear to be strong evidence of large positive externalities to human capital (Temple 2001a). However, McMahon (2004) reports some empirical specifications which do yield substantial indirect long-run effects.

For the period before 1960, limits on the availability of data on schooling and income have limited the use of this empirical regression approach. Thus, any discussion of the impact of externalities of education on production is considerably more conjectural. The central role of government, religious, and philanthropic agencies in the provision of schooling suggests the presence of externalities. Politicians and educators more frequently justified government and philanthropic provision of schooling by its impacts on religious and moral behavior than by any market failure resulting in sub-optimal provision of schooling from the standpoint of maximizing labor productivity. Thus, Adam Smith, in his discussion of mass schooling in The Wealth of Nations, places more emphasis on its value to the state in enhancing orderliness and decency, while reducing the propensity to popular superstition, than on its immediate value in enhancing the economic productivity of the individual worker.

The Impact of the Level of Human Capital on Rates of Economic Growth

The approaches considered thus far relate changes in educational attainment of the labor force to changes in output per worker. An alternative, though not mutually exclusive, approach is to relate the level of educational attainment of an economy’s labor force to its rate of economic growth. The argument for doing so is that a high but unchanging level of educational attainment should contribute to growth by facilitating creativity, innovation and adaptation to change as well as facilitating the ongoing maintenance and improvement of skill in the workforce. Topel (1999) has argued that there may not be any fundamental difference between the two types of approach insofar as ongoing sources of productivity advance and adaptation to change could be viewed as reflecting ongoing improvements in human capital. Nevertheless, some empirical studies based on international data for the late twentieth century have found that a country’s level of educational attainment has a much stronger impact on its rate of economic growth than its rate of improvement in educational attainment (Benhabib and Spiegel 1994).

The paucity of data on schooling attainment has limited the empirical examination of the relationship between levels of human capital and economic growth for periods before the late twentieth century. However, Sandberg (1982) has argued, based on a descriptive comparison of economies in various categories, that those with high levels of schooling in 1850 subsequently experienced faster rates of economic growth. Some studies, such as O’Rourke and Williamson (1997) and Foreman-Peck and Lains (1999), have found that high levels of schooling and literacy have contributed to more rapid rates of convergence for European countries in the late nineteenth century and at the state level for the U.S. over the twentieth century (Connolly 2004).

Bowman and Anderson (1963), a much earlier study based on international evidence for the mid-twentieth century, can be interpreted in the spirit of relating levels of education to subsequent income growth. Their reading of the cross-country relationship between literacy rates and per capita income at mid-century was that a threshold of 40 percent adult literacy was required for a country to have a per capita income above 300 dollars (measured in 1955 dollars). Some have ahistorically projected this literacy threshold back to earlier centuries, although the Bowman and Anderson proposal was intended to apply to mid-twentieth-century development patterns.

The mechanisms by which the level of schooling would influence the rate of economic growth are difficult to establish. One can distinguish two general possibilities. One is that higher levels of educational attainment facilitate adaptation and responsiveness to change throughout the workforce. This would be especially important where a large percentage of workers are in decision-making positions, as in an economy composed largely of small farmers and other small enterprises. The finding of Foster and Rosenzweig (1996) for late twentieth-century India that the rate of return to schooling is higher during periods of more rapid technological advance in agriculture would be consistent with this. Likewise, Nilsson et al. (1999) find that literacy was important for nineteenth-century Swedish farmers in dealing with enclosure, an institutional change. The other possibility is that higher levels of educational attainment increase the potential pool from which an elite group responsible for innovation can be recruited. This could be viewed as applying specifically to scientific and technical innovation, as in Mokyr (2002) and Jones (2002), but also to technological and industrial leadership more generally (Nelson and Wright 1992) and to facilitating advancement in society by ability irrespective of social origins (Galor and Tsiddon 1997). Recently, Labuske and Baten (2004) have found that international rates of patenting are related to secondary enrollment rates.

Two issues have arisen in the recent theoretical literature regarding the specification of relationships between the level of human capital and rates of economic growth. First, Lucas (1988), in an influential model of the impact of human capital on growth, specifies that the rate of growth of human capital depends on initial levels of human capital; in other words, parents’ and teachers’ human capital has a direct positive influence on the rate of growth of learners’ human capital. This specification of the impact of the initial level of human capital allows for ongoing and unbounded growth of human capital and, through this, its ongoing contribution to economic growth. Such ongoing growth of human capital could occur through improvements in the quality of schooling or through enhanced improvements in learning from parents and other informal settings. While it might be plausible to suppose that improved education of teachers will enhance their effectiveness with learners, it seems less plausible to suppose that this enhanced effectiveness will increase without bound in proportion to initial levels of education (Lord 2001, 82).

A second issue is that insofar as higher levels of human capital contribute to economic growth through increases in research and development activity, and innovative activity more generally, one would expect the presence of scale effects: economies with larger populations, holding constant their level of human capital per person, should benefit from more overall innovative activity simply because they have more people engaged in it. Jones (1995) has pointed out that such scale effects seem implausible if one looks at the time-series relationship between rates of economic growth and the numbers engaged in innovative activity. In recent decades the growth in the number of scientists, engineers, and others engaged in innovative activity has far outstripped the growth of productivity and other direct indicators of innovation’s impact. Thus, one should allow for diminishing returns in the relationship between levels of education and technological advance.
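
Jones’s argument can likewise be illustrated with a simple numerical sketch of a semi-endogenous ideas production function of the kind he proposes, in which the stock of ideas A grows with research employment R but with diminishing returns to the existing stock. All parameter values below are arbitrary and purely illustrative.

```python
# Sketch of a Jones (1995)-style ideas production function:
#   dA/dt = delta * R**lam * A**phi, with phi < 1.
# A permanent doubling of research employment R raises the growth
# rate of A only temporarily. All parameter values are illustrative.
delta, lam, phi = 0.0002, 1.0, 0.5
A, R = 1.0, 100.0

for year in range(1, 201):
    if year == 100:
        R *= 2  # permanent doubling of researchers
    growth = delta * R**lam * A**(phi - 1)  # proportional growth of A
    A *= 1 + growth
    if year in (99, 101, 200):
        print(f"year {year}: growth of ideas = {growth:.2%}")
# Growth jumps when R doubles but drifts back down as A accumulates:
# more researchers raise the level of A, not its long-run growth rate.
```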

Thus, as with schooling externalities, the impact of levels of education on growth can operate through numerous channels of influence, leaving the historian with the challenge of ascertaining their quantitative importance in the past.

Conclusion

This survey has considered some of the basic ways in which the rise of mass education has contributed to economic growth in recent centuries. Given their potential influence on labor productivity, levels of and changes in schooling, and in human capital more generally, have the potential to explain a large share of increases in per capita output over time. However, increases in mass schooling seem to explain a major share of economic growth only over relatively short periods, with a more modest impact over longer time horizons. In some situations, such as the United States in the twentieth century, it appears that improvements in the schooling of the labor force have made substantial contributions to economic growth. Yet schooling should not be seen as either a necessary or a sufficient condition for generating economic growth. Factors other than education can contribute to economic growth, and in their absence it is not clear that schooling by itself can do so. Moreover, there are likely limits on the extent to which average years of schooling of the labor force can expand, although improvement in the quality of schooling is not so obviously bounded. Perhaps the most obvious avenue through which education has contributed to economic growth is by raising the rate of technological change. But as has been noted, there are numerous other possible channels of influence, ranging from political stability and property rights to life expectancy and fertility. The diversity of these channels points to both the challenges and the opportunities in examining the historical connections between education and economic growth.

References

Aaronson, Daniel and Daniel Sullivan. “Growth in Worker Quality.” Economic Perspectives, Federal Reserve Bank of Chicago 25, no. 4 (2001): 53-74.

Abramovitz, Moses and Paul David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In Cambridge Economic History of the United States, Vol. III, The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 1-92. New York: Cambridge University Press, 2000.

A’Hearn, Brian. “Institutions, Externalities, and Economic Growth in Southern Italy: Evidence from the Cotton Textile Industry, 1861-1914.” Economic History Review 51, no. 4 (1998): 734-62.

Arnove, Robert F. and Harvey J. Graff, editors. National Literacy Campaigns: Historical and Comparative Perspectives. New York: Plenum Press, 1987.

Ashton, T.S. The Industrial Revolution, 1760-1830. Oxford: Oxford University Press, 1948.

Barro, Robert J. “Notes on Growth Accounting.” NBER Working Paper 6654, 1998.

Baumol, William. “Entrepreneurship: Productive, Unproductive, and Destructive.” Journal of Political Economy 98, no. 5, part 1 (1990): 893-921.

Benhabib, J. and M. M. Spiegel. “The Role of Human Capital in Economic Development: Evidence from Aggregate Cross-country Data.” Journal of Monetary Economics 34 (1994): 143-73.

Bessen, James. “Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842.” Journal of Economic History 63, no. 1 (2003): 33-64.

Bils, Mark and Peter J. Klenow. “Does Schooling Cause Growth?” American Economic Review 90, no. 5 (2000): 1160-83.

Birdsall, Nancy. “Public Spending on Higher Education in Developing Countries: Too Much or Too Little?” Economics of Education Review 15, no. 4 (1996): 407-19.

Blaug, Mark. An Introduction to the Economics of Education. Harmondsworth, England: Penguin Books, 1970.

Boot, H.M. “How Skilled Were Lancashire Cotton Factory Workers in 1833?” Economic History Review 48, no. 2 (1995): 283-303.

Bowman, Mary Jean and C. Arnold Anderson. “Concerning the Role of Education in Development.” In Old Societies and New States: The Quest for Modernity in Africa and Asia, edited by Clifford Geertz. Glencoe, IL: Free Press, 1963.

Broadberry, Stephen. “Human Capital and Productivity Performance: Britain, the United States and Germany, 1870-1990.” In The Economic Future in Historical Perspective, edited by Paul A. David and Mark Thomas. Oxford: Oxford University Press, 2003.

Conlisk, John. “Comments” on Griliches. In Education, Income, and Human Capital, edited by W. Lee Hansen. New York: Columbia University Press, 1970.

Connolly, Michelle. “Human Capital and Growth in the Postbellum South: A Separate but Unequal Story.” Journal of Economic History 64, no. 2 (2004): 363-99.

Crafts, Nicholas. “Exogenous or Endogenous Growth? The Industrial Revolution Reconsidered.” Journal of Economic History 55, no. 4 (1995): 745-72.

Davies, James and John Whalley. “Taxes and Capital Formation: How Important Is Human Capital?” In National Saving and Economic Performance, edited by B. Douglas Bernheim and John B. Shoven, 163-97. Chicago: University of Chicago Press, 1991.

DeLong, J. Bradford, Claudia Goldin and Lawrence F. Katz. “Sustaining U.S. Economic Growth.” In Agenda for the Nation, edited by Henry Aaron, James M. Lindsay, and Pietro S. Niyola, 17-60. Washington, D.C.: Brookings Institution Press, 2003.

Denison, Edward F. The Sources of Economic Growth in the United States and the Alternatives before Us. New York: Committee for Economic Development, 1962.

Denison, Edward F. Why Growth Rates Differ: Postwar Experience in Nine Western Countries. Washington, D.C.: Brookings Institution Press, 1967.

Easterlin, Richard. “Why Isn’t the Whole World Developed?” Journal of Economic History 41, no. 1 (1981): 1-19.

Foreman-Peck, James and Pedro Lains. “Economic Growth in the European Periphery, 1870-1914.” Paper presented at the Third Conference of the European Historical Economics Society, Lisbon, Portugal, 1999.

Foster, Andrew D. and Mark R. Rosenzweig. “Technical Change and Human-capital Returns and Investments: Evidence from the Green Revolution.” American Economic Review 86, no. 4 (1996): 931-53.

Galor, Oded and Daniel Tsiddon. “The Distribution of Human Capital, Technological Progress and Economic Growth.” Journal of Economic Growth 2, no. 1 (1997): 93-124.

Galor, Oded and Omer Moav. “Das Human Kapital.” Brown University Working Paper No. 2000-17, July 2003.

Goldin, Claudia. “The Human Capital Century and American Leadership: Virtues of the Past.” Journal of Economic History 61, no. 2 (2001): 263-92.

Goldin, Claudia and Lawrence F. Katz. “The Origins of Technology-Skill Complementarity.” Quarterly Journal of Economics 113, no. 3 (1998): 693-732.

Goldin, Claudia and Lawrence F. Katz. “Decreasing (and Then Increasing) Inequality in America: A Tale of Two Half-Centuries.” In The Causes and Consequences of Increasing Inequality, edited by Finis Welch, 37-82. Chicago: University of Chicago Press, 2001.

Graff, Harvey J. The Legacies of Literacy: Continuities and Contradictions in Western Culture and Society. Bloomington: Indiana University Press, 1987.

Griliches, Zvi. “Notes on the Role of Education in Production Functions and Growth Accounting.” In Education, Income, and Human Capital, edited by W. Lee Hansen. New York: Columbia University Press, 1970.

Hansen, Hal. “Caps and Gowns: Historical Reflections on the Institutions that Shaped Learning for and at Work in Germany and the United States, 1800-1945.” Ph.D. dissertation, University of Wisconsin, 1997.

Hanushek, Eric and Dennis D. Kimko. “Schooling, Labor-Force Quality, and the Growth of Nations.” American Economic Review 90, no. 3 (2000): 1184-1208.

Hendricks, Lutz. “How Important Is Human Capital for Development? Evidence from Immigrant Earnings.” American Economic Review 92, no. 1 (2002): 198-219.

Ho, Mun S. and Dale Jorgenson. “The Quality of the U.S. Work Force, 1948-1995.” Harvard University Working Paper, 1999.

Houston, R.A. Literacy in Early Modern Europe: Culture and Education, 1500-1800. London: Longman, 1988.

Jones, Charles. “R&D-Based Models of Economic Growth.” Journal of Political Economy 103, no. 4 (1995): 759-84.

Jones, Charles. “Sources of U.S. Economic Growth in a World of Ideas.” American Economic Review 92, no. 1 (2002): 220-39.

Jorgenson, Dale W. and Barbara M. Fraumeni. “The Accumulation of Human and Nonhuman Capital, 1948-84.” In The Measurement of Saving, Investment, and Wealth, edited by R. E. Lipsey and H. S. Tice. Chicago: University of Chicago Press, 1989.

Jorgenson, Dale W. and Zvi Griliches. “The Explanation of Productivity Change.” Review of Economic Studies 34, no. 3 (1967): 249-83.

Kendrick, John W. “How Much Does Capital Explain?” In Explaining Economic Growth: Essays in Honour of Angus Maddison, edited by Adam Szirmai, Bart van Ark and Dirk Pilat, 129-45. Amsterdam: North Holland, 1993.

Krueger, Alan B. and Mikael Lindahl. “Education for Growth: Why and for Whom?” Journal of Economic Literature 39, no. 4 (2001): 1101-36.

Krueger, Anne O. “Factor Endowments and per Capita Income Differences among Countries.” Economic Journal 78, no. 311 (1968): 641-59.

Kuznets, Simon. Modern Economic Growth: Rate, Structure and Spread. New Haven: Yale University Press, 1966.

Labuske, Kirsten and Joerg Baten. “Patenting Abroad and Human Capital Formation.” University of Tubingen Working Paper, 2004.

Laqueur, Thomas. “Debate: Literacy and Social Mobility in the Industrial Revolution in England.” Past and Present 64, no. 1 (1974): 96-107.

Lichtenberg, Frank R. “Have International Differences in Educational Attainments Narrowed?” In Convergence of Productivity: Cross-national Studies and Historical Evidence, edited by William J. Baumol, Richard R. Nelson, and Edward N. Wolff, 225-42. New York: Oxford University Press, 1994.

Lindert, Peter H. Growing Public: Social Spending and Economic Growth since the Eighteenth Century. Cambridge: Cambridge University Press, 2004.

Lord, William A. Household Dynamics: Economic Growth and Policy. New York: Oxford University Press, 2002.

Lucas, Robert E., Jr. “On the Mechanics of Economic Development.” Journal of Monetary Economics 22, no. 1 (1988): 3-42.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Margo, Robert A. Race and Schooling in the South, 1880-1950: An Economic History. Chicago: University of Chicago Press, 1990.

Mariscal, Elisa and Kenneth Sokoloff. “Schooling, Suffrage, and the Persistence of Inequality in the Americas, 1800-1945.” In Political Institutions and Economic Growth in Latin America: Essays in Policy, History, and Political Economy, edited by Stephen Haber, 159-218. Stanford: Hoover Institution Press, 2000.

Matthews, R.C.O., C. H. Feinstein, and J.C. Odling-Smee. British Economic Growth, 1856-1973. Stanford: Stanford University Press, 1982.

McMahon, Walter. Education and Development: Measuring the Social Benefits. Oxford: Oxford University Press, 2002.

Mitch, David. “The Spread of Literacy in Nineteenth-Century England.” Ph.D. dissertation, University of Chicago, 1982.

Mitch, David. “Education and Economic Growth: Another Axiom of Indispensability?” In Education and Economic Development since the Industrial Revolution, edited by Gabriel Tortella. Valencia: Generalitat Valencia, 1990. Reprinted in The Economic Value of Education: Studies in the Economics of Education, edited by Mark Blaug, 385-401. Cheltenham, UK: Edward Elgar, 1992.

Mitch, David. “The Role of Education and Skill in the British Industrial Revolution.” In The British Industrial Revolution: An Economic Perspective (second edition), edited by Joel Mokyr, 241-79. Boulder, CO: Westview Press, 1999.

Mitch, David. “Education and Skill of the British Labour Force.” In The Cambridge Economic History of Modern Britain, Vol. 1, Industrialization, 1700-1860, edited by Roderick Floud and Paul Johnson, 332-56. Cambridge: Cambridge University Press, 2004a.

Mitch, David. “School Finance.” In International Handbook on the Economics of Education, edited by Geraint Johnes and Jill Johnes, 260-97. Cheltenham, UK: Edward Elgar, 2004b.

Mokyr, Joel. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton: Princeton University Press, 2002.

Mulligan, Casey B. and Xavier Sala-I-Martin. “A Labor Income-based Measure of the Value of Human Capital: An Application to the States of the United States.” Japan and the World Economy 9, no. 2 (1997): 159-91.

Mulligan, Casey B. and Xavier Sala-I-Martin. “Measuring Aggregate Human Capital.” Journal of Economic Growth 5, no. 3 (2002): 215-52.

Murphy, Kevin M., Andrei Shleifer, and Robert W. Vishny. “The Allocation of Talent: Implications for Growth.” Quarterly Journal of Economics 106, no. 2 (1991): 503-30.

National Center for Education Statistics. 120 Years of American Education: A Statistical Portrait. Washington, D.C.: U.S. Department of Education, Office of Educational Research and Improvement, 1993.

Nelson, Richard R. and Gavin Wright. “The Rise and Fall of American Technological Leadership: The Postwar Era in Historical Perspective.” Journal of Economic Literature 30, no. 4 (1992): 1931-64.

Nicholas, Stephen and Jacqueline Nicholas. “Male Literacy, ‘Deskilling’ and the Industrial Revolution.” Journal of Interdisciplinary History 23, no. 1 (1992): 1-18.

Nilsson, Anders, Lars Pettersson and Patrick Svensson. “Agrarian Transition and Literacy: The Case of Nineteenth-century Sweden.” European Review of Economic History 3, no. 1 (1999): 79-96.

North, Douglass C. Institutions, Institutional Change and Economic Performance. Cambridge: Cambridge University Press, 1990.

OECD. Education at a Glance: OECD Indicators. Paris: OECD, 2001.

Olson, Mancur, Jr. “Big Bills Left on the Sidewalk: Why Some Nations Are Rich and Others Poor.” Journal of Economic Perspectives 10, no. 2 (1996): 3-24.

O’Rourke, Kevin and Jeffrey G. Williamson. “Around the European Periphery, 1870-1913: Globalization, Schooling and Growth.” European Review of Economic History 1, no. 2 (1997): 153-90.

Pritchett, Lant. “Where Has All the Education Gone?” World Bank Economic Review 15, no. 3 (2001): 367-91.

Psacharopoulos, George. “The Contribution of Education to Economic Growth: International Comparisons.” In International Comparisons of Productivity and Causes of the Slowdown, edited by John W. Kendrick. Cambridge, MA: Ballinger Publishing, 1984.

Psacharopoulos, George. “Public Spending on Higher Education in Developing Countries: Too Much Rather than Too Little.” Economics of Education Review 15, no. 4 (1996): 421-22.

Psacharopoulos, George and Harry Anthony Patrinos. “Human Capital and Rates of Return.” In International Handbook on the Economics of Education, edited by Geraint Johnes and Jill Johnes, 1-57. Cheltenham, UK: Edward Elgar, 2004.

Rangazas, Peter. “Schooling and Economic Growth: A King-Rebelo Experiment with Human Capital.” Journal of Monetary Economics 46, no. 2 (2000): 397-416.

Rosenzweig, Mark. “Why Are There Returns to Schooling?” American Economic Review Papers and Proceedings 85, no. 2 (1995): 69-75.

Rosenzweig, Mark. “Schooling, Economic Growth and Aggregate Data.” In Development, Duality and the International Economic Regime, edited by Gary Saxonhouse and T.N. Srinivasan, 107-29. Ann Arbor: University of Michigan Press, 1997.

Sandberg, Lars. “The Case of the Impoverished Sophisticate: Human Capital and Swedish Economic Growth before World War I.” Journal of Economic History 39, no. 1 (1979): 225-41.

Sandberg, Lars. “Ignorance, Poverty and Economic Backwardness in the Early Stages of European Industrialization: Variations on Alexander Gerschenkron’s Grand Theme.” Journal of European Economic History 11 (1982): 675-98.

Sanderson, Michael. “Literacy and Social Mobility in the Industrial Revolution in England.” Past and Present 56 (1972): 75-104.

Saxonhouse, Gary. “Productivity Change and Labor Absorption in Japanese Cotton Spinning, 1891-1935.” Quarterly Journal of Economics 91, no. 2 (1977): 195-220.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Chicago: University of Chicago Press, [1776] 1976.

Temple, Jonathan. “Growth Effects of Education and Social Capital in the OECD Countries.” OECD Economic Studies 33, no. 1 (2001a): 58-96.

Temple, Jonathan. “Generalizations That Aren’t? Evidence on Education and Growth.” European Economic Review 45, no. 4-6 (2001b): 905-18.

Topel, Robert. “Labor Markets and Economic Growth.” In Handbook of Labor Economics, Volume 3, edited by Orley Ashenfelter and David Card, 2943-84. Amsterdam: Elsevier Science, 1999.

Woessmann, Ludger. Schooling and the Quality of Human Capital. Berlin: Springer, 2002.

Woessmann, Ludger. “Specifying Human Capital.” Journal of Economic Surveys 17, no. 3 (2003): 239-70.

Wood, Adrian. “How Trade Hurt Unskilled Workers.” Journal of Economic Perspectives 9, no. 3 (1995): 57-80.

Young, Alwyn. “The Tyranny of Numbers: Confronting the Statistical Realities of the East Asian Growth Experience.” Quarterly Journal of Economics 110, no. 3 (1995): 641-80.


[1] I have received helpful comments on this essay from Mac Boot, Claudia Goldin, Bill Lord, Lant Pritchett, Robert Whaples, and an anonymous referee. At an earlier stage in working through some of this material, I benefited from a quite useful conversation with Nick Crafts. However, I bear sole responsibility for remaining errors and shortcomings.

[2] For a detailed survey of trends in schooling in the early modern and modern period see Graff (1987).

[3] See Barro (1998) for a brief intellectual history of growth accounting.

[4] Blaug (1970) provides an accessible, detailed critique of the assumptions behind Denison’s growth accounting approach and Topel (1999) provides a further discussion of the problems of using a growth accounting approach to measure the contribution of education, especially those due to omitting social externalities.

[5] By using a Cobb-Douglas specification of the aggregate production function, one can arrive at the following expression for the ratio between final and initial national income per worker due to increases in average school years completed between two time periods, t = 0 and t = 1:

Start with the aggregate production function specification:

Y = A K^(1-α) [(1+r)^S L]^α

Y/L = A (K/L)^(1-α) [(1+r)^S L/L]^α

Y/L = A (K/L)^(1-α) [(1+r)^S]^α

Assume that the average years of schooling of the labor force is the only thing that changes between t = 0 and t = 1; that is, assume no change in the ratio of capital to labor between the two periods. Then the ratio of income per worker in the later period to that in the earlier period will be:

(Y/L)_1 / (Y/L)_0 = ((1 + r)^(S_1 - S_0))^α

where Y = output, A = a measure of the current state of technology, K = the physical capital stock, L = the labor force, r = the percent by which a year of schooling increases labor productivity, S = the average years of schooling completed by the labor force in each time period, α = labor’s share in national income, and the subscripts 0 and 1 denote the initial and final time periods.

As noted above, the derivation above is for a partial equilibrium change in years of schooling of the labor force holding constant the physical capital stock. Allowing for physical capital stock accumulation in response to schooling increases in a Solow-type model implies that the ratio of final to initial output per worker will be

(Y/L)_1 / (Y/L)_0 = (1 + r)^(S_1 - S_0).

For a derivation of this see Lord (2002, 99-100). Lord’s derivation differs from the one here by specifying the technology parameter A as labor augmenting. Allowing for increases in A over time due to technical change would further increase the contribution of additional years of schooling to output per worker.
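
A short numerical check of the two ratios, under assumed illustrative values (a 10 percent return per year of schooling, a labor share of 0.7, and schooling rising from 0 to 16 years; the labor share and the 0-to-16 comparison are assumptions, not values from the text):

```python
# Numerical check of the two ratios derived above, with assumed values:
# r = 0.10 (return per year of schooling), alpha = 0.7 (labor's share),
# schooling rising from S0 = 0 to S1 = 16 years. alpha and the 0-to-16
# comparison are illustrative assumptions.
r, alpha, S0, S1 = 0.10, 0.7, 0, 16

ratio_fixed_k = ((1 + r) ** (S1 - S0)) ** alpha  # capital per worker held fixed
ratio_steady = (1 + r) ** (S1 - S0)              # capital adjusts (Solow steady state)

print(f"fixed capital: output per worker rises {ratio_fixed_k:.2f}-fold")
print(f"steady state:  output per worker rises {ratio_steady:.2f}-fold")
# Prints about 2.91 and 4.59; the latter matches the universal-college
# figure cited in note [6].
```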

[6] To take a specific example, suppose that in the steady-state case of Table 1B, a 5 percent earnings premium per year of schooling is assigned to the first 6 years of schooling (primary schooling), a 10 percent premium per year to the next 6 years (secondary schooling), and a 15 percent premium per year to the final 4 years (college). In that case, the impact on steady-state income per capita compared with no schooling at all would be (1.05)^6 × (1.10)^6 × (1.15)^4 = 4.15, compared with the 4.59 obtained in going from no schooling to universal college at a 10 percent rate of return for every year of school completed.
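
The arithmetic can be checked directly:

```python
# Check of the arithmetic in this note: graduated premia by schooling
# level versus a flat 10 percent premium for all 16 years.
graduated = 1.05**6 * 1.10**6 * 1.15**4   # primary, secondary, college
flat = 1.10**16                           # uniform 10 percent per year

print(f"graduated premia: {graduated:.2f}")   # about 4.15
print(f"flat 10 percent:  {flat:.2f}")        # about 4.59
```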

[7] Denison’s standard growth accounting approach assumes that education is labor augmenting and, in particular, that there is an infinite elasticity of substitution between skilled and unskilled labor. This specification is conventional in growth accounting analysis. Another common specification enters human capital into the aggregate production function as a third factor alongside unskilled labor and physical capital. Insofar as this is done with a Cobb-Douglas production function, as is conventional, the implied elasticity of substitution between human capital and either unskilled labor or physical capital is unity. The complementarity between human capital and other inputs that this implies will tend to increase the contribution of human capital to economic growth by slowing the onset of diminishing returns. (For a fuller treatment of the considerations involved, see Griliches 1970, Conlisk 1970, and Broadberry 2003.) For an application of this approach in a historical growth accounting exercise, see Crafts (1995), who finds a fairly substantial contribution of human capital during the English industrial revolution. For a critique of Crafts’ estimates see Mitch (1999).
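
A minimal sketch of the two specifications, with arbitrary illustrative parameter values, may help fix ideas:

```python
# Sketch of the two ways of entering education discussed in this note,
# with arbitrary illustrative parameters. In the labor-augmenting form,
# human capital h multiplies raw labor; in the three-factor Cobb-Douglas
# form, human capital H is a separate input alongside K and L.
A, K, L = 1.0, 100.0, 100.0
alpha = 0.7        # labor's share (assumed)
a, b = 0.3, 0.35   # capital and human-capital shares (assumed)

def labor_augmenting(h):
    return A * K**(1 - alpha) * (h * L)**alpha

def three_factor(H):
    return A * K**a * H**b * L**(1 - a - b)

print(f"doubling h: output x {labor_augmenting(2.0) / labor_augmenting(1.0):.2f}")
print(f"doubling H: output x {three_factor(100.0) / three_factor(50.0):.2f}")
# Output rises 2**0.7 = 1.62-fold and 2**0.35 = 1.27-fold respectively.
# In the three-factor Cobb-Douglas form the marginal product of H rises
# with K and L (unit elasticity of substitution), the complementarity
# that slows the onset of diminishing returns noted above.
```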

[8] For an examination of long-run growth dynamics with schooling investments endogenously determined by transfer-constrained family decisions, see Lord 2002, 209-13, and Rangazas 2000. Lord and Rangazas find that allowing for the fact that families are credit constrained in making schooling investment decisions is consistent with the time path of interest rates in the U.S. between 1870 and 1970.

Citation: Mitch, David. “Education and Economic Growth in Historical Perspective”. EH.Net Encyclopedia, edited by Robert Whaples. July 26, 2005. URL http://eh.net/encyclopedia/education-and-economic-growth-in-historical-perspective/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula, bordering Germany, and the Danish Isles, and covers 43,069 square kilometers (16,629 square miles). The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following will deal with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark sixth. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1:
Average Annual GDP Growth (at factor costs)

Period       Total   Per capita
1870-1880    1.9%    0.9%
1880-1890    2.5%    1.5%
1890-1900    2.9%    1.8%
1900-1913    3.2%    2.0%
1913-1929    3.0%    1.6%
1929-1938    2.2%    1.4%
1938-1950    2.4%    1.4%
1950-1960    3.4%    2.6%
1960-1973    4.6%    3.8%
1973-1982    1.5%    1.3%
1982-1993    1.6%    1.5%
1993-2004    2.2%    2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture, like most other small and medium-sized European countries. It is, however, rather unusual to find a rich European country in the late nineteenth and mid-twentieth century that retained such a strong agrarian bias. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. Thus an economic history of Denmark must, for quite a long stretch of time, take agricultural development as its point of departure.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This was significant because it was accompanied, in the Danish case, by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in selected sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km and the fact that no point is more than 50 km from the sea were advantages in an age in which transport by sea was more economical than transport by land.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities, and this share changed little. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump, with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles became the victim of a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density, which encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor, which forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however: demesne land, that is, the land worked directly under the estate, never made up more than nine percent of total land by the mid-eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby becoming the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600 the income from taxation and customs, mostly the Sound Toll collected from ships passing the narrow strait between Denmark and today’s Sweden, was as large as the revenue from Crown lands. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined, in relative terms, to about one-third, and after 1660 the transition from domain state to tax state was complete.

The bulk of the former Crown land had been sold to nobles and a few non-noble estate owners. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus, conscription of troops for warfare, collection of land taxes and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which in turn triggered technological and institutional innovation. Whereas the Danish population during the previous hundred years had grown by about 0.4 percent per annum, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, the accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic disease, due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and formal education of midwives from the early nineteenth century might have played a role (Banggård 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the increase in population in Northern Europe, but also following a number of international conflicts. This in turn caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers, or cottagers with little land, emerged. The work of these day-laborers came to replace the labor services of tenant farmers on the demesnes. The old system of labor services presented an obvious incentive problem, all the more so since the services were often carried out by the tenant farmers’ live-in servants. Thus, the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money payments, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result, two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete by the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below, that is, from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire to preserve peasant land as a tax base weighed heavily, and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and when new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger: we no longer find a correlation between demographic variables, deaths and births, and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the captive Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came as the British Corn Laws were repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open economy forces. Denmark became a net importer of foreign capital from the 1890s, and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined, population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. Thus the export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy, which responded effectively to a change in international product prices, in this instance caused by the invasion of cheap grain to Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seem to have realized the advantages accruing from the free import of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion lowered Danish rents by only 4-5 percent, while real wages rose, as expected, and by more than in any other agrarian economy, indeed more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing milk brought together from a number of suppliers to be skimmed. From the 1880s the majority of creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital, that is, of farm animals and of the modern cooperative creameries. Not least, this intensive production meant a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on a par with the productivity increase in the urban trades.

Industrialization in Denmark made its modest beginning in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industrial exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exports to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of this monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 parity. When monetary policy was finally tightened in 1924, the result was fierce speculation on an appreciation of the Krone. During 1925-26 the currency quickly returned to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1997).

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on the Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision of the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled, and policy was slightly procyclical, as taxes were raised to cover the deficit created by the crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, favored agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of employees in agriculture stagnated, while the number of self-employed farmers increased. The development in relative incomes cannot account for this trend; part of the explanation must instead be found in a flawed Danish land policy, which actively supported the further parceling out of land into smallholdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s share of GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response, the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as Denmark had again been spared the worst consequences of a major war. In 1946 GDP regained its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

The growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950, at a time when international trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade, caused by the British devaluation of 1949 (when Denmark followed suit), the outbreak of the Korean War in 1950, and the Suez crisis of 1956, made matters worse. The ensuing deficits on the balance of payments led the government to adopt contractionary policy measures, which restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime conditions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration, Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA), created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices, at the same time as investments were increased in anticipation of the gains from membership. As a result, the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s: the EEC started to reduce the producers’ benefits of the CAP because of overproduction, and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001), Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum, the possible economic advantages of the Euro in the form of lower transaction costs were considered modest compared with the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties are nevertheless pro-European, with only the extreme right and the extreme left being against. There seems to be a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage of the 1940s was a new commitment to high employment, modified by a balance of payments constraint. Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period, in the form of rationing and price control, were dismantled around 1950, and in that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management, with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword, but within that framework monetary policy was allotted a passive role. The major political parties were long wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports met with constraints from the then EEC-member countries and from most EFTA countries as well. During the same decade, the 1960s, as the importance of agriculture declined, the share of employment in the public sector grew rapidly, a growth that continued until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition, mostly industry and agriculture, by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid-1960s, and not least the corresponding increase in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown much earlier, it was in the 1960s that public expenditure as a share of GDP came to exceed that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. The background for much of the increase in the number of public employees from the late 1960s was the rise in labor participation by married women from the late 1960s until about 1990, itself at least partly a consequence of the growth of the public sector. In response, public day care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again created more employment opportunities for women in the public sector. Today labor participation among women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally, social welfare programs targeted low income earners, who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The state subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment of the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods. One distinctive feature of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high by international standards, particularly for low income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically during the high unemployment regime from the mid-1970s to the mid-1990s. To pay for the high transfers and the large public sector – around 30 percent of the work force – the tax load is also high by international standards. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to that of Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment, especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars: the first is low employment protection, the second is relatively high compensation rates for the unemployed, and the third is the requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid-1970s to the mid-1990s with the past ten years, when unemployment declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receive transfers other than unemployment benefits – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with those of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime. The belief is that, for a small and open economy, a floating regime could lead to very volatile exchange rates, which would harm foreign trade. After abandoning the gold standard in 1931, the Danish currency (the Krone) was for a while pegged to the British pound, only to join the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy manifested itself again when the Danish currency was devalued along with the pound in 1949 and, halfway, in 1967. The latter devaluation also reflected the fact that, after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation known as the “Snake” arrangement, set up in 1972, which was continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because of markedly higher inflation in Denmark than in Germany. In the end the Danish government gave way to the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of servicing old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated the problem: the anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest, which peaked at 22 percent in nominal terms in 1982, an interest spread to Germany of 10 percentage points. Combined with the effects of the second oil crisis on the Danish terms of trade, this drove unemployment up to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without calling a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on a fixed exchange rate, pegging the Krone to the currencies of the EMS and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had been in force (with a short lag, high coverage, and only brief interruptions) since 1920, was abolished. Fiscal policy was tightened, bringing an end to the real increases in public expenditure that had continued since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: in view of earlier Danish exchange rate policy, it took some time for markets to accept the commitment to fixed exchange rates as credible. From the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that once again caused overheating, in the form of high wage increases (in 1987) and a deteriorating current account. The response was a number of reforms in 1986-87 aimed at encouraging private saving, which had by then fallen to a historical low. Most notable was the reform that reduced the tax deductibility of interest payments on private debt. These measures resulted in a hard landing for the economy, as the housing market collapsed.

The period of low growth was further prolonged by the international recession of 1992. In 1993 yet another regime shift occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion, whereas in 1994 the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures, the Danish economy entered from 1994 a period of moderate growth, with unemployment steadily falling back to levels last seen in the 1970s. A feature that still puzzles Danish economists is that this decline in unemployment has not yet resulted in any increase in wage inflation.

At the beginning of the twenty-first century, Denmark in many ways fits the description of a Small Successful European Economy in the sense of Mokyr (2006). Unlike most of the other small economies, however, Denmark has broad-based exports rather than a single “niche” in the world market. As in some other small European countries – Ireland, Finland and Sweden – the short-term economic fluctuations described above have not followed the European business cycle very closely over the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-Parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An International Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen” [Flexicurity: The Danish Labor Market Model]. Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed: Danmark, ca. 1750-1850 [Measures to Promote Population and Declining Child Mortality: Denmark, ca. 1750-1850]. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark [Economic Growth in Denmark]. Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980 [Danish Historical Statistics, 1814-1980]. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity: An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

An Economic History of Copyright in Europe and the United States

B. Zorina Khan, Bowdoin College

Introduction

Copyright is a form of intellectual property that provides legal protection against unauthorized copying of the producer’s original expression in products such as art, music, books, articles, and software. Economists have paid relatively little scholarly attention to copyrights, although recent debates about piracy and “the digital dilemma” (free use of digital property) have prompted closer attention to theoretical and historical issues. Like other forms of intellectual property, copyright is directed to the protection of cultural creations that are nonrivalrous and nonexclusive in nature. It is generally argued that, in the absence of private or public forms of exclusion, prices will tend to be driven down toward the low or zero marginal cost of copying, and the original producer will be unable to recover the initial investment.
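
This standard argument can be made concrete with a minimal sketch (illustrative notation, not from the source). Suppose a work costs $F$ to create and each copy costs $c \approx 0$ to reproduce. With unrestricted copying, competition drives the price $p$ toward the marginal cost $c$, so the creator’s profit on $q$ copies is

$$ \pi = (p - c)\,q - F \;\approx\; -F < 0 , $$

and the fixed creation cost $F$ cannot be recovered without some form of exclusion, public subsidy, or alternative appropriation strategy.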

Part of the debate about copyright arises because it is still not clear whether state enforcement is necessary to enable owners to gain returns, or whether the producers of copyrightable products respond significantly to financial incentives. Producers of these public goods might still be able to appropriate returns without copyright laws, or in the face of widespread infringement, through such strategies as encryption, cartelization, the provision of complementary products, private monitoring and enforcement, market segmentation, network externalities, first-mover effects and product differentiation. Patronage, taxation, subsidies, or public provision might also serve as alternatives to copyright protection. In some instances “authors” (broadly defined) might be more concerned about nonfinancial rewards such as enhanced reputations or more extensive diffusion.

During the past three centuries great controversy has been associated with the grant of property rights to authors, with positions ranging from the notion that cultural creativity should be rewarded with perpetual rights to the complete rejection of any intellectual property rights at all for copyrightable commodities. Historically, however, the primary emphasis has been on the provision of copyright protection through the formal legal system. Europeans have generally tended to adopt the philosophical position that authorship embodies rights of personhood or moral rights that should be accorded strong protection. The American approach to copyright has been more utilitarian: policies were based on a comparison of costs and benefits, and the primary emphasis of early copyright policies was on the advancement of public welfare. However, the harmonization of international laws has created a melding of these two approaches. The tendency at present is toward stronger enforcement of copyrights, prompted by the lobbying of publishers and the globalization of culture and commerce. Technological change has always exerted an exogenous force for change in copyright laws, and modern innovations in particular provoke questions about the extent to which copyright systems can respond effectively to such challenges.

Copyright in Europe

Copyright in France

In the early years of printing, books and other written matter became part of the public domain when they were published. Like patents, the grant of book privileges originated in the Republic of Venice in the fifteenth century, a practice that soon became prevalent in a number of other European countries. Donatus Bossius, a Milanese author, petitioned the duke in 1492 for an exclusive privilege for his book, successfully arguing that he would be unjustly deprived of the benefits of his efforts if others were able to copy his work freely. He was given the privilege for a term of ten years. However, authorship was not required for the grant of a privilege, and printers and publishers obtained monopolies over existing books as well as new works. Since privileges were granted on a case-by-case basis, they varied in geographical scope, duration, and breadth of coverage, as well as in the attendant penalties for their violation. Grantors included religious orders and authorities, universities, political figures, and the representatives of the Crown.

The French privilege system was introduced in 1498 and was well developed by the end of the sixteenth century. Privileges were granted under the auspices of the monarch, generally for a brief period of two to three years, although the term could be as long as ten years. Protection was granted to new books or translations, maps, type designs, engravings and artwork. Petitioners paid formal fees and informal gratuities to the officials concerned. Since applications could be sealed only in the King’s presence, petitions had to be carefully timed to take advantage of his route or his return from trips and campaigns. Matters became somewhat more convenient when courts of appeal such as the Parlement de Paris began to issue grants that were privileges in all but name, although this could lead to conflicting rights if another authority had already allocated the monopoly elsewhere. The courts sometimes imposed limits on the rights conferred, in the form of stipulations about the prices that could be charged. Privileges were property that could be assigned or licensed to another party, and their infringement was punished by fines and at times by confiscation of all the output of “pirates.”

After 1566, the Edict of Moulins required that all new books be approved and licensed by the Crown. Favored parties were able to obtain renewals of their monopolies that also allowed them to lay claim to works already in the public domain. By the late eighteenth century an extensive administrative procedure was in place, designed to restrict the number of presses and to exercise surveillance and censorship over the publishing industry. Manuscripts first had to be read by a censor, and a book could be printed only after a permit was requested and granted, although the permit could later be revoked if complaints were lodged by sufficiently influential individuals. Decrees in 1777 established that authors who did not alienate their property were entitled to exclusive rights in perpetuity. Since few authors had the will or resources to publish and distribute books, their privileges were usually sold outright to professional publishers. The law, however, made a distinction in the rights accorded to publishers: if the right was sold, the privilege was accorded only a limited duration of at least ten years, the exact term to be determined in accordance with the value of the work, and once the publisher’s term expired, the work passed into the public domain. The fee for a privilege was thirty-six livres. Approval to print a work, or a “permission simple,” which did not entail exclusive rights, could also be obtained after payment of a substantial fee. Between 1700 and 1789, a total of 2,586 petitions for exclusive privileges were filed, and about two-thirds were granted. The outcome was a system of “odious monopolies,” higher prices and greater scarcity, large transfers to officials of the Crown and their allies, and pervasive censorship. It likewise disadvantaged smaller book producers, provincial publishers, and the academic and broader community.

The French Revolutionary decrees of 1791 and 1793 replaced the idea of privilege with that of uniform statutory claims to literary property, based on the principle that “the most sacred, the most unassailable and the most personal of possessions is the fruit of a writer’s thought.” The subject matter of copyright covered books, dramatic productions and the output of the “beaux arts,” including designs and sculpture. Authors were required to deposit two copies of their books with the Bibliothèque Nationale or risk losing their copyright. Some observers felt that copyrights in France were the least protected of all property rights, since they were enforced with a care to protecting the public domain and social welfare. Although France is associated with the author’s rights approach to copyright and proclamations of the “droit d’auteur,” these ideas evolved slowly and hesitantly, mainly in order to serve the self-interest of the various members of the book trade. During the ancien régime, the rhetoric of authors’ rights had been promoted by French owners of book privileges as a way of deflecting criticism of monopoly grants and of protecting their profits, and by their critics as a means of attacking the same monopolies and profits. This language was retained in the statutes after the Revolution, so the changes in interpretation and enforcement may not have been universally evident.

By the middle of the nineteenth century, French jurisprudence and philosophy tended to explicate copyrights in terms of rights of personality, but the idea of the moral claim of authors to property rights was not incorporated in the law until early in the twentieth century. The droit d’auteur first appeared in a law of April 1910. In 1920 visual artists were granted a “droit de suite,” a claim to a portion of the revenues from the resale of their works. The subsequent evolution of French copyright law led to the recognition of the right of disclosure, the right of retraction, the right of attribution, and the right of integrity. These moral rights are (at least in theory) perpetual and inalienable, and thus can be bequeathed to the heirs of the author or artist, regardless of whether the work itself has been sold. The self-interested rhetoric of the owners of monopoly privileges thus fully emerged as the keystone of the “French system of literary property” that would shape international copyright laws in the twentieth century.

Copyright in England

England similarly experienced a period during which privileges were granted, such as a seven-year grant from the Chancellor of Oxford University for a work of 1518. In 1557, the Worshipful Company of Stationers, a publishers’ guild, was founded on the authority of a royal charter and controlled the book trade for the next one hundred and fifty years. This company created and controlled the right of its constituent members to make copies, so that in effect their “copy right” was a private property right existing in perpetuity, independently of state or statutory rights. Enforcement and regulation were carried out by the corporation itself through its Court of Assistants. The Stationers’ Company maintained a register of books, issued licenses, and sanctioned individuals who violated its regulations. Thus, in both England and France, copyright law began as a monopoly grant to benefit and regulate the printers’ guilds and as a form of surveillance and censorship over public opinion on behalf of the Crown.

The English system of privileges was replaced in 1710 by a copyright statute (the “Statute of Anne” or “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of Such Copies, During the Times Therein Mentioned,” 1709-10, 8 Anne, ch. 19.) The statute was not directed toward the authors of books and their rights. Rather, its intent was to restrain the publishing industry and destroy its monopoly power. According to the law, the grant of copyright was available to anyone, not just to the Stationers. Instead of a perpetual right, the term was limited to fourteen years, with a right of renewal, after which the work would enter the public domain. The statute also permitted the importation of books in foreign languages.

Subsequent litigation and judicial interpretation added a new and fundamentally different dimension to copyright. In order to protect their perpetual copyright, publishers tried to promote the idea that copyright was based on the natural rights of authors or creative individuals and that, as the agent of the author, those rights devolved to the publisher. If copyrights indeed derived from these inherent principles, they represented property that existed independently of statutory provisions and could be protected under common law. The booksellers engaged in a series of strategic lawsuits that culminated in their defeat in the landmark case Donaldson v. Beckett [98 Eng. Rep. 257 (1774)]. The court ruled that authors had a common law right in their unpublished works, but that on publication this right was extinguished by the statute, whose provisions determined the nature and scope of any copyright claims. This transition from publisher’s rights to statutory author’s rights meant that copyright had been transmuted from a straightforward license to protect monopoly profits into an expanding property right whose boundaries would henceforth increase at the expense of the public domain.

Between 1735 and 1875 fourteen Acts of Parliament amended the copyright legislation. Copyright was extended to sheet music, maps, charts, books, sculptures, paintings, photographs, dramatic works and songs sung in a dramatic fashion, and lectures outside of educational institutions. Copyright owners had no remedies at law unless they complied with a number of stipulations, which included registration, the payment of fees, the delivery of free copies of every edition to the British Museum (delinquents were fined), and complimentary copies for four libraries, including the Bodleian and Trinity College. The ubiquitous Stationers’ Company administered registration, and the registrar personally benefited from the fees of 5 shillings when a book was registered, an equal amount for each assignment and each copy of an entry, and one shilling for each entry searched. Foreigners could obtain copyrights only if they were present in a part of the British Empire at the time of publication. The book had to be published in the United Kingdom, and prior publication in a foreign country – even in a British colony – was an obstacle to copyright protection.

The term of copyright in books was the longer of 42 years from publication or the lifetime of the author plus seven years, and after the death of the author a compulsory license could be issued to ensure that works of sufficient public benefit would be published. The “work for hire” doctrine applied to books, reviews, newspapers, magazines and essays unless a distinct contractual clause specified that the copyright was to accrue to the author. Unauthorized use of a publication was also permitted for purposes of “fair use.” Only the copyright holder and his agents were allowed to import the protected works into Britain.
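
Stated as a rule, the term is simply the larger of two dates. A minimal sketch in Python (hypothetical function and variable names; only the rule itself comes from the text above) shows how the expiry year was determined:

    # Term under the British rule described above: the longer of 42 years
    # from publication or the author's lifetime plus seven years.
    def uk_book_copyright_expiry(publication_year: int, author_death_year: int) -> int:
        return max(publication_year + 42, author_death_year + 7)

    # Example: a book published in 1850 by an author who died in 1855
    # was protected until max(1892, 1862) = 1892.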

The British Commission that reported on the state of the copyright system in 1878 felt that the laws were “obscure, arbitrary and piecemeal” and were compounded by the confused state of the common law. The numerous uncoordinated laws that were simultaneously in force led to conflicts and unintended defects in the system. The report discussed but did not recommend an alternative to the grant of copyrights, in the form of a royalty system where “any person would be entitled to copy or republish the work on paying or securing to the owner a remuneration, taking the form of royalty or definite sum prescribed by law.” The main benefit would have accrued to the public, in the form of early access to cheap editions, whereas the main cost would have fallen on publishers, whose risk and return would have been negatively affected.

The Commission noted that the implications for the colonies were “anomalous and unsatisfactory.” Publishers in England practiced price discrimination, modifying the initially high prices of copyrighted material through discounts given to reading clubs, circulating libraries and the like, benefits which were not available in the colonies. In 1846 the Colonial Office acknowledged “the injurious effects produced upon our more distant colonists,” and the Foreign Reprints Act was passed in the following year. This allowed colonies that adopted the terms of British copyright legislation to import cheap reprints of British copyrighted material subject to a tariff of 12.5 percent, the proceeds of which were to be remitted to the copyright owners. However, enforcement of the tariff seems to have been less than vigorous, since between 1866 and 1876 only £1155 was received from the 19 colonies that took advantage of the legislation (£1084 of it from Canada, which benefited significantly from the American reprint trade). The Canadians argued that it was difficult to monitor imports, so it would be more effective to allow them to publish the reprints themselves and collect taxes for the benefit of the copyright owners. This proposal was rejected, but under the Canadian Copyright Act of 1875 British copyright owners could obtain Canadian copyrights for Canadian editions that were sold at much lower prices than in Britain or even in the United States.
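
The receipts themselves suggest how little trade was actually declared. A back-of-the-envelope calculation, sketched below in illustrative Python using only the figures quoted above:

    # Implied declared imports under the Foreign Reprints Act,
    # computed from the tariff receipts quoted above (pounds sterling).
    tariff_rate = 0.125            # 12.5 percent levy on colonial reprints
    receipts_1866_76 = 1155        # total remitted by the 19 colonies
    implied_imports = receipts_1866_76 / tariff_rate
    print(implied_imports)         # 9240.0, i.e. under 1,000 pounds of
                                   # declared imports per year across all
                                   # 19 colonies combined

On these numbers, either colonial demand for British reprints was trivial or, as contemporaries suspected, most of the trade simply went undeclared.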

The Commission made two recommendations. First, the bigger colonies with domestic publishing facilities should be allowed to reprint copyrighted material on payment of a license fee to be set by law. Second, the benefit to the smaller colonies of access to British literature should take precedence over lobbying to repeal the Foreign Reprints Act, which should be better enforced rather than removed entirely. Some had argued that the public interest required Britain to allow the importation of cheap colonial reprints, since the high prices of books were “altogether prohibitory to the great mass of the reading public,” but the Commission felt that this should be adopted only with the consent of the copyright owner. The Commission also devoted a great deal of attention to what was termed “The American Question,” but took the “highest public ground” and recommended against retaliatory policies.

Copyright in the United States

Colonial Copyright

In the period before the Declaration of Independence the individual American colonies recognized and promoted patenting activity, but copyright protection was not considered to be of equal importance, for a number of reasons. First, in a democracy the claims of the public and the wish to foster freedom of expression were paramount. Second, to a new colony, pragmatic concerns were likely of greater importance than the arts, and the more substantial literary works were imported. Markets were sufficiently narrow that an individual printer could saturate them with a first-run printing, and most local publishers produced ephemera such as newspapers, almanacs, and bills. Third, it was unclear that copyright protection was needed as an incentive for creativity, especially since a significant fraction of output consisted of works such as medical treatises and religious tracts whose authors wished simply to maximize the number of readers rather than the amount of income they received.

In 1783, Connecticut became the first state to approve an “Act for the encouragement of literature and genius” because “it is perfectly agreeable to the principles of natural equity and justice, that every author should be secured in receiving the profits that may arise from the sale of his works, and such security may encourage men of learning and genius to publish their writings; which may do honor to their country, and service to mankind.” Although this preamble might seem to strongly favor author’s rights, the statute also specified that books were to be offered at reasonable prices and in sufficient quantities, or else a compulsory license would issue.

Federal Copyright Grants

Despite their common source in the intellectual property clause of the U.S. Constitution, copyright policies provided a marked contrast to the patent system. According to Wheaton v. Peters, 33 U.S. 591, 684 (1834): “It has been argued at the bar, that as the promotion of the progress of science and the useful arts is here united in the same clause in the constitution, the rights of the authors and inventors were considered as standing on the same footing; but this, I think, is a non sequitur, for when congress came to execute this power by legislation, the subjects are kept distinct, and very different provisions are made respecting them.”

The earliest federal statute to protect the product of authors was approved on May 31, 1790, “for the encouragement of learning, by securing the copies of maps, charts, and books to the authors and proprietors of such copies, during the times therein mentioned.” John Barry obtained the first federal copyright when he registered his spelling book in the District Court of Pennsylvania, and early grants reflected the same utilitarian character. Policy makers felt that copyright protection would serve to increase the flow of learning and information, and by encouraging publication would contribute to democratic principles of free speech. The diffusion of knowledge would also ensure broad-based access to the benefits of social and economic development. The copyright act required authors and proprietors to deposit a copy of the title of their work in the office of the district court in the area where they lived, for a nominal fee of sixty cents. Registration secured the right to print, publish and sell maps, charts and books for a term of fourteen years, with the possibility of an extension for another like term. Amendments to the original act extended protection to other works, including musical compositions, plays and performances, engravings and photographs. Legislators refused to grant perpetual terms, but the length of protection was extended in the general revisions of the laws in 1831 and 1909.

In the case of patents, the rights of inventors, whether domestic or foreign, were widely viewed as coincident with public welfare. In stark contrast, policymakers showed from the very beginning an acute sensitivity to trade-offs between the rights of authors (or publishers) and social welfare. The protections provided to authors under copyrights were as a result much more limited than those provided by the laws based on moral rights that were applied in many European countries. Of relevance here are stipulations regarding first sale, work for hire, and fair use. Under a moral rights-based system, an artist or his heirs can claim remedies if subsequent owners alter or distort the work in a way that allegedly injures the artist’s honor or reputation. According to the first sale doctrine, the copyright holder lost all rights after the work was sold. In the American system, if the copyright holder’s welfare were enhanced by nonmonetary concerns, these individualized concerns could be addressed and enforced through contract law, rather than through a generic federal statutory clause that would affect all property holders. Similarly, “work for hire” doctrines also repudiated the right of personality, in favor of facilitating market transactions. For example, in 1895 Thomas Donaldson filed a complaint that Carroll D. Wright’s editing of Donaldson’s report for the Census Bureau was “damaging and injurious to the plaintiff, and to his reputation” as a scholar. The court rejected his claim and ruled that as a paid employee he had no rights in the bulletin; to rule otherwise would create problems in situations where employees were hired to prepare data and statistics.

This difficult quest for balance between private and public good was most evident in the copyright doctrine of “fair use,” which (unlike patent doctrine) allowed unauthorized access to copyrighted works under certain conditions. Joseph Story ruled in Folsom v. Marsh [9 F. Cas. 342 (1841)]: “we must often, in deciding questions of this sort, look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work.” One of the striking features of the fair use doctrine is the extent to which property rights were defined in terms of market valuations, or the impact on sales and profits, as opposed to a clear holding of the exclusivity of property. Fair use doctrine thus illustrates the extent to which early policy makers weighed the costs and benefits of private property rights against the rights of the public and the provisions for a democratic society. If copyrights were as strictly construed as patents, they would serve to reduce scholarship, prohibit public access for noncommercial purposes, increase transaction costs for potential users, and inhibit the learning which the statutes were meant to promote.

Nevertheless, like other forms of intellectual property, the copyright system evolved to encompass improvements in technology and changes in the marketplace. Technological changes in nineteenth-century printing included the use of stereotyping, which lowered the costs of reprints, improvements in paper-making machinery, and the advent of steam-powered printing presses. Graphic design also benefited from innovations, most notably the development of lithography and photography. The number of new products also expanded significantly, encompassing recorded music and moving pictures by the end of the nineteenth century, and commercial television, video recordings, audiotapes, and digital music in the twentieth century.

The subject matter, scope and duration of copyrights expanded over the course of the nineteenth century to include musical compositions, plays, engravings, sculpture, and photographs. By 1910 the original copyright holder had also been granted derivative rights, such as the right to translations of literary works into other languages, to performances, and to adaptations of musical works. Congress also lengthened the term of copyright several times, although by 1890 the terms of copyright protection in Greece and the United States were the most abbreviated in the world. New technologies stimulated change by creating new subjects for copyright protection, and by lowering the costs of infringement of copyrighted works. In Edison v. Lubin, 122 F. 240 (1903), the lower court rejected Edison’s copyright of moving pictures under the statutory category of photographs. This decision was overturned by the appellate court: “[Congress] must have recognized there would be change and advance in making photographs, just as there has been in making books, printing chromos, and other subjects of copyright protection.” Copyright enforcement was largely the concern of commercial interests, not of the creative individual. The fraction of copyright plaintiffs who were authors (broadly defined) was initially quite low, and fell continuously during the nineteenth century. By 1900-1909, only 8.6 percent of all plaintiffs in copyright cases were the creators of the item at issue; the majority of parties bringing cases were publishers and other assignees of copyrights.

In 1909 Congress revised the copyright law, and composers were given the right to make the first mechanical reproductions of their music. However, after the first recording, the statute permitted a compulsory license to issue for copyrighted musical compositions: that is, anyone could subsequently make a recording of the composition on payment of a fee set by the statute at two cents per recording. In effect, the property right was transformed into a liability rule. The next major legislative change, in 1976, similarly allowed compulsory licenses to issue for works broadcast on cable television. The prevalence of compulsory licenses for copyrighted material is worth noting for a number of reasons: they underline some of the statutory differences between patents and copyrights in the United States; they reflect economic reasons for such distinctions; and they are also the result of political compromises among the various affected interest groups.
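
The way a statutory rate converts a property right into a liability rule can be seen in a minimal Python sketch (hypothetical names; only the two-cent rate comes from the statute as described above):

    # 1909 Act compulsory mechanical license: after the first authorized
    # recording, anyone may record the composition at the statutory rate.
    STATUTORY_RATE = 0.02  # dollars per recording

    def mechanical_royalty(copies_pressed: int) -> float:
        return copies_pressed * STATUTORY_RATE

    # Example: pressing 10,000 copies obliges the recording firm to pay
    # $200; the composer cannot refuse, only collect at the price set by law.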

Allied Rights

The debate about the scope of patents and copyrights often underestimates or ignores the importance of allied rights that are available through other forms of the law such as contract and unfair competition. A noticeable feature of the case law is the willingness of the judiciary in the nineteenth century to extend protection to noncopyrighted works under alternative doctrines in the common law. More than 10 percent of copyright cases dealt with issues of unfair competition, and 7.7 percent with contracts; a further 12 percent encompassed issues of right to privacy, trade secrets, and misappropriation. For instance, in Keene v. Wheatley et al., 14 F. Cas. 180 (1860), the plaintiff did not have a statutory copyright in the play that was infringed. However, she was awarded damages on the basis of her proprietary common law right in an unpublished work, and because the defendants had taken advantage of a breach of confidence by one of her former employees. Similarly, the courts offered protection against misappropriation of information, such as occurred when the defendants in Chamber of Commerce of Minneapolis v. Wells et al., 111 N.W. 157 (1907) surreptitiously obtained stock market information by peering in windows, eavesdropping, and spying.

Several other examples relate to the more traditional copyright subject of the book trade. E. P. Dutton & Company published a series of Christmas books which another publisher photographed and offered as a series with similar appearance and style but at lower prices. Dutton claimed to have been injured by a loss of profits and a loss of reputation as a maker of fine books. The firm did not have copyrights in the series, but it essentially claimed a right in the “look and feel” of the books. The court agreed: “the decisive fact is that the defendants are unfairly and fraudulently attempting to trade upon the reputation which plaintiff has built up for its books. The right to injunctive relief in such a case is too firmly established to require the citation of authorities.” In a case that will resonate with academics, a surgery professor at the University of Pennsylvania was held to have a common law property right in the lectures he presented, and a student could not publish them without his permission. Titles could not be copyrighted, but they were protected as trademarks and under unfair competition doctrines. In this way, through numerous lawsuits, G. C. Merriam & Co., the original publisher of Webster’s Dictionary, restrained the actions of competitors who published the dictionary once the copyrights had expired.

International Copyrights in the United States

The U.S. was long a net importer of literary and artistic works, especially from England, which implied that recognition of foreign copyrights would have led to a net deficit in international royalty payments. The Copyright Act recognized this when it specified that “nothing in this act shall be construed to extend to prohibit the importation or vending, reprinting or publishing within the United States, of any map, chart, book or books … by any person not a citizen of the United States.” Thus, the statutes explicitly authorized Americans to take free advantage of the cultural output of other countries. As a result, it was alleged that American publishers “indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement.” The tendency to reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent.

The United States stood out in contrast to countries such as France, where Louis Napoleon’s Decree of 1852 prohibited counterfeiting of both foreign and domestic works. Other countries which were affected by American piracy retaliated by refusing to recognize American copyrights. Despite the lobbying of numerous authors and celebrities on both sides of the Atlantic, the American copyright statutes did not allow for copyright protection of foreign works for fully one century. As a result, American publishers and producers freely pirated foreign literature, art, and drama.

Effects of Copyright Piracy

What were the effects of piracy? First, did the American industry suffer from cheaper foreign books being dumped on the domestic market? This does not seem to have been the case. After controlling for the type of work, the cost of the work, and other variables, the prices of American books were lower than the prices of foreign books. American book prices may have been lower to reflect lower perceived quality or other factors that caused imperfect substitutability between foreign and local products. As might be expected, prices were not exogenously and arbitrarily fixed, but varied with a publisher’s assessment of market factors such as the degree of competition and the responsiveness of demand. The reading public appears to have gained from the lack of copyright, which increased access to the superior products of the more developed markets in Europe, and in the long run this likely improved both the demand for and the supply of domestic science and literature.

Second, according to observers, professional authorship in the United States was discouraged because it was difficult to compete with established authors such as Scott, Dickens and Tennyson. Whether native authors were deterred by foreign competition would depend on the extent to which foreign works prevailed in the American market. Early in American history the majority of books were reprints of foreign titles. However, nonfiction titles written by foreigners were less likely to be substitutable for nonfiction written by Americans; consequently, nonfiction soon tended to be supplied by native authors. From an early period grammars, readers, and juvenile texts were also written by Americans. Geology, geography, history and similar works would have to be adapted or completely rewritten to be appropriate for the American market, which reduced their attractiveness as reprints. Thus, publishers of schoolbooks, medical volumes and other nonfiction did not feel that the reforms of 1891 were relevant to their undertakings. Academic and religious books are less likely to be written for monetary returns, and their authors probably benefited from the wider circulation that the lack of international copyright encouraged. However, the writers of these works declined in importance relative to writers of fiction, a category whose share grew from 6.4 percent before 1830 to 26.4 percent by the 1870s.

On the other hand, foreign authors dominated the field of fiction for much of the century. One study estimates that about fifty percent of all fiction bestsellers in the antebellum period were pirated from foreign works. In 1895 American authors accounted for two of the top ten bestsellers, but by 1910 nine of the top ten were written by Americans. This fall over time in the share of foreign authorship may have been due to a natural evolutionary process, as the development of the market for domestic literature encouraged specialization. The growth in the number of fiction authors was associated with an increase in the number of books per author over the same period. Improvements in transportation and the growth of the academic population probably played a large role in enabling individuals who lived outside the major publishing centers to become writers. As the market expanded, a larger fraction of writers could become professionals.

Although the lack of copyright protection may not have discouraged authors, this does not imply that intellectual property policy in this dimension had no costs. It is likely that the lack of foreign copyrights led to some misallocation of efforts or resources, such as in attempting to circumvent the rules. Authors changed their residence temporarily when books were about to be published in order to qualify for copyright. Others obtained copyrights by arranging to co-author with a foreign citizen. T. H. Huxley adopted this strategy, arranging to co-author with “a young Yankee friend … Otherwise the thing would be pillaged at once.” An American publisher suggested that Kipling should find “a hack writer, whose name would be of use simply on account of its carrying the copyright.” Harriet Beecher Stowe proposed a partnership with Elizabeth Gaskell, so they could “secure copyright mutually in our respective countries and divide the profits.”

It is widely acknowledged that copyrights in books tended to be the concern of publishers rather than of authors (although the two are naturally not independent of each other). As a result of the lack of legal copyrights in foreign works, publishers raced to be first on the market with “new” pirated books, and the industry experienced several decades of intense, if not quite “ruinous,” competition. These were problems that publishers in England had faced before, in the market for uncopyrighted books such as Shakespeare and Fielding. Their solution was to collude in the form of strictly regulated cartels or “printing congers.” The congers created divisible property in books that they traded, such as a one-hundred-and-sixtieth share in Johnson’s Dictionary that was sold for £23 in 1805. Cooperation resulted in risk sharing and a greater ability to cover expenses. The unstable races in the United States similarly settled down during the 1840s into collusive standards termed “trade custom” or “courtesy of the trade.”
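
The share price quoted above implies a market valuation for the whole property, as the following back-of-the-envelope Python sketch (illustrative variable names, figures from the text) shows:

    # Implied valuation of the copyright in Johnson's Dictionary from
    # the traded conger share quoted above.
    share_fraction = 1 / 160    # a one-hundred-and-sixtieth share
    share_price = 23            # pounds sterling, 1805
    implied_value = share_price / share_fraction
    print(implied_value)        # 3680.0 pounds for the entire book property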

The industry achieved relative stability because the dominant firms cooperated in establishing synthetic property rights in foreign-authored books. American publishers made advance payments (termed “copyrights”) to foreign authors to secure early sheets; these payments not only aligned the interests of publishers and authors but were also recognized by “reputable” publishers as conferring an exclusive property in the “authorized reprint.” Such exclusive rights were tradable, and were enforced by threats of predatory pricing and retaliation. These practices suggest that publishers were able to simulate the legal grant through private means.

However, these private rights naturally did not amount to property rights that could be enforced at law. The case of Sheldon v. Houghton, 21 F. Cas. 1239 (1865), illustrates how vested such rights had become: the trade custom at issue was described as “very valuable, and is often made the subject of contracts, sales, and transfers, among booksellers and publishers.” The very fact that a firm would ask the court to protect its claim indicates the value attached to it. The plaintiff argued that “such custom is a reasonable one, and tends to prevent injurious competition in business, and to the investment of capital in publishing enterprises that are of advantage to the reading public.” The courts rejected this claim, since synthetic rights differed from copyrights in the degree of security offered by the enforcement power of the courts. Nevertheless, these title-specific rights of exclusion decreased uncertainty, enabled publishers to recoup their fixed costs, and avoided the wasteful duplication of resources that would otherwise have occurred.

It was not until 1891 that the Chace Act granted copyright protection to selected foreign residents. Thus, after a century of lobbying by interested parties on both sides of the Atlantic, based on reasons that ranged from the economic to the moral, copyright laws changed only when the United States became more competitive in the international market for literary and artistic works. However, the act also included significant concessions to printers’ unions and printing establishments in the form of “manufacturing clauses.” First, a book had to be published in the U.S. before or at the same time as its publication in the country of origin. Second, the work had to be printed in the United States, or printed from type set in the United States or from plates made from such type. Copyright protection still depended on conformity with stipulations such as formal registration of the work. These clauses caused the United States to fail to qualify for admission to the international Berne Convention until 1988, more than one hundred years after the first Convention.

After the copyright reforms of 1891, both English and American authors were disappointed to find that the change in the law did not lead to significant gains. Foreign authors realized that they might even have benefited from the lack of copyright protection in the United States. Despite the cartelization of publishing, competition for synthetic copyrights had ensured that foreign authors obtained the payments American firms made to secure the right to be first on the market. It can also be argued that foreign authors were able to reap higher total returns from the expansion of the market through piracy. The lack of copyright protection may have functioned as a form of price discrimination, whereby the product was sold at a higher price in the developed country and at a lower or zero price in the poorer country. Returns under such circumstances may have been higher for goods with demand externalities or network effects, such as “bestsellers,” where consumer valuation of the book increased with the size of the market. For example, Charles Dickens, Anthony Trollope, and other foreign writers were able to gain considerable income from complementary lecture tours in the extensive United States market.

Harmonization of Copyright Laws

In view of the strong protection accorded to inventors under the U.S. patent system, its copyright policies appeared to foreign observers all the more reprehensible. The United States, the most liberal in its policies toward patentees, had led the movement for harmonization of patent laws. In marked contrast, throughout the history of the U.S. system its copyright grants were in general more abridged than those of almost all other countries in the world. The term of copyright granted to American citizens was among the shortest in the world, the country applied the broadest interpretation of fair use doctrines, and the validity of a copyright depended on strict compliance with formal requirements. The U.S. failure to recognize the rights of foreign authors was also unique among the major industrial nations. Throughout the nineteenth century proposals to reform the law and to acknowledge foreign copyrights were repeatedly brought before Congress and rejected. Even the bill that finally recognized international copyrights almost failed, passed only at the last possible moment, and included longstanding exemptions in favor of printing workers and printing enterprises.

In a parallel fashion to the role of the United States in patent matters, France’s influence was evident in the subsequent evolution of international copyright laws. Other countries had long recognized the rights of foreign authors in national laws and bilateral treaties, but France stood out in its favorable treatment of domestic and foreign copyrights as “the foremost of all nations in the protection it accords to literary property.” This was especially true of its concessions to foreign authors and artists. France extended copyrights to foreigners, conditional on manufacturing clauses, in 1810, and granted foreign and domestic authors equal rights in 1852. In the following decade France entered into almost two dozen bilateral treaties, prompting a movement toward multilateral negotiations, such as the Congress on Literary and Artistic Property in 1858. The International Literary and Artistic Association, which the French novelist Victor Hugo helped to establish, conceived of and organized the Convention that first met in Berne in 1883.

The Berne Convention included a number of countries that wished to establish an “International Union for the Protection of Literary and Artistic Works.” The preamble declared their intent to “protect effectively, and in as uniform a manner as possible, the rights of authors over their literary and artistic works.” The actual Articles were more modest in scope, requiring national treatment of authors belonging to the Union and minimum protection for translation and public performance rights. The Convention authorized the establishment of a physical office in Switzerland, whose official language would be French. The rules were revised in 1908 to extend the duration of copyright and to include modern technologies. Perhaps the most significant aspect of the convention was not its specific provisions, but the underlying property rights philosophy which was decidedly from the natural rights school. Berne abolished compliance with formalities as a prerequisite for copyright protection since the creative act itself was regarded as the source of the property right. This measure had far-reaching consequences, because it implied that copyright was now the default, whereas additions to the public domain would have to be achieved through affirmative actions and by means of specific limited exemptions. In 1928 the Berne Convention followed the French precedent and acknowledged the moral rights of authors and artists.

Unlike its leadership in patent conventions, the United States declined an invitation to the pivotal copyright conference in Berne in 1883; it attended but refused to sign the 1886 agreement of the Berne Convention. Instead, the United States pursued international copyright policies in the context of the weaker Universal Copyright Convention (UCC), which was adopted in 1952 and formalized in 1955 as a complementary agreement to the Berne Convention. The UCC membership included many developing countries that did not wish to comply with the Berne Convention because they viewed its provisions as overly favorable to the developed world. The United States was among the last wave of entrants into the Berne Convention when it finally joined in 1988. In order to do so it complied by removing prerequisites for copyright protection such as registration, and also by lengthening the term of copyrights. However, it still has not introduced federal legislation in accordance with Article 6bis, which declares the moral rights of authors “independently of the author’s economic rights, and even after the transfer of the said rights.” Similarly, individual countries continue to differ in the extent to which multilateral provisions govern domestic legislation and practices.

The quest for harmonization of intellectual property laws resulted in a “race to the top,” directed by the efforts and self-interest of the countries with the strongest property rights. The movement to harmonize patents was driven by American efforts to ensure that its extraordinary patenting activity was remunerated beyond as well as within its borders, even as the United States ignored international conventions to unify copyright legislation. Nevertheless, the harmonization of copyright laws proceeded, promoted by France and other civil law regimes which urged stronger protection for authors based on their “natural rights,” even as they infringed on the rights of foreign inventors. The net result was that international pressure was applied to developing countries in the twentieth century to establish both strong patents and strong copyrights, although no individual developed country had adhered to both standards simultaneously during its own early growth phase. This occurred even though theoretical models did not offer persuasive support for intellectual property harmonization, and indeed suggested that uniform policies might be detrimental even to some developed countries and to overall global welfare.

Conclusion

The past three centuries stand out in terms of the diversity across nations in intellectual property institutions, but the nineteenth century saw the origins of the movement towards the “harmonization” of laws that at present dominates global debates. Among the now-developed countries, the United States stood out for its conviction that broad access to intellectual property rules and standards was key to achieving economic development. Europeans were less concerned about enhancing mass literacy and public education, and viewed copyright owners as inherently meritorious and deserving of strong protection. European copyright regimes thus evolved in the direction of author’s rights, while the United States lagged behind the rest of the world in terms of both domestic and foreign copyright protection.

By design, American statutes differentiated between patents and copyrights in ways that seemed warranted if the objective was to increase social welfare. The patent system early on discriminated between nonresident and domestic inventors, but within a few decades changed to protect the rights of any inventor who filed for an American patent, regardless of nationality. The copyright statutes, in contrast, openly encouraged piracy of foreign goods on an astonishing scale for one hundred years, in defiance of the recriminations and pressures exerted by other countries. The American patent system required an initial search and examination to ensure that the patentee was the “first and true” creator of the invention in the world, whereas copyrights were granted through mere registration. Patents were based on the assumption of novelty and held invalid if this assumption was violated, whereas essentially similar but independent creation was copyrightable. Copyright holders were granted the right to derivative works, whereas the patent holder was not. Unauthorized use of patented inventions was prohibited, whereas “fair use” of copyrighted material was permissible if certain conditions were met. Patented inventions involved greater initial investment, effort, and novelty than copyrighted products and tended to be more responsive to material incentives, whereas in many cases cultural goods would still have been produced, or their output only slightly reduced, in the absence of such incentives. Fair use was not allowed in the case of patents because the disincentive effect was likely to be higher, while the costs of negotiation between the patentee and the narrower market of potential users would generally be lower. If copyrights were as strongly enforced as patents, the gains would accrue to publishers and a small literary elite at the cost of social investments in learning and education.

The United States created a utilitarian market-based model of intellectual property grants which created incentives for invention, but always with the primary objective of increasing social welfare and protecting the public domain. The checks and balances of interest group lobbies, the legislature and the judiciary worked effectively as long as each institution was relatively well-matched in terms of size and influence. However, a number of legal and economic scholars are increasingly concerned that the political influence of corporate interests, the vast number of uncoordinated users over whom the social costs are spread, and international harmonization of laws have upset these counterchecks, leading to over-enforcement at both the private and public levels.

International harmonization with European doctrines introduced significant distortions in the fundamental principles of American copyright and its democratic provisions. One of the most significant of these changes was also one of the least debated: compliance with the precepts of the Berne Convention accorded automatic copyright protection to all creations on their fixation in tangible form. This rule reversed the relationship between copyright and the public domain that the U.S. Constitution stipulated. According to original U.S. copyright doctrines, the public domain was the default, and copyright merely comprised a limited exemption to the public domain; after the alignment with Berne, copyright became the default, and the rights of the public and of the public domain now merely comprise a limited exception to the primacy of copyright. The pervasive uncertainty that characterizes the intellectual property arena today leads risk-averse individuals and educational institutions to err on the side of abandoning their right to free access rather than invite potential challenges and costly litigation. A number of commentators are equally concerned about other dimensions of the globalization of intellectual property rights, such as the movement to emulate European grants of property rights in databases, which has the potential to inhibit diffusion and learning.

Copyright law and policy have always altered, and been altered by, social, economic and technological changes, in the United States and elsewhere. However, one feature has remained constant across the centuries: copyright protection raises crucial political questions to an even greater extent than economic ones.

Additional Readings

Economic History

B. Zorina Khan. The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920. New York: Cambridge University Press, 2005.

Law and Economics

Besen, Stanley, and Leo Raskind. “An Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5 (1991): 3-27.

Breyer, Stephen. “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies and Computer Programs.” Harvard Law Review 84 (1970): 281-351.

Gallini, Nancy, and Suzanne Scotchmer. “Intellectual Property: When Is It the Best Incentive System?” Innovation Policy and the Economy 2 (2002): 51-78.

Gordon, Wendy, and R. Watt, editors. The Economics of Copyright: Developments in Research and Analysis. Cheltenham, UK: Edward Elgar, 2002.

Hurt, Robert M., and Robert M. Shuchman. “The Economic Rationale of Copyright.” American Economic Review Papers and Proceedings 56 (1966): 421-32.

Johnson, William R. “The Economics of Copying.” Journal of Political Economy 93 (1985): 158-74.

Landes, William M., and Richard A. Posner. “An Economic Analysis of Copyright Law.” Journal of Legal Studies 18 (1989): 325-63.

Landes, William M., and Richard A. Posner. The Economic Structure of Intellectual Property Law. Cambridge, MA: Harvard University Press, 2003.

Liebowitz, S. J. “Copying and Indirect Appropriability: Photocopying of Journals.” Journal of Political Economy 93 (1985): 945-57.

Merges, Robert P. “Contracting into Liability Rules: Intellectual Property Rights and Collective Rights Organizations.” California Law Review 84, no. 5 (1996): 1293-1393.

Meurer, Michael J. “Copyright Law and Price Discrimination.” Cardozo Law Review 23 (2001): 55-148.

Novos, Ian E., and Michael Waldman. “The Effects of Increased Copyright Protection: An Analytic Approach.” Journal of Political Economy 92 (1984): 236-46.

Plant, Arnold. “The Economic Aspects of Copyright in Books.” Economica 1 (1934): 167-95.

Takeyama, Lisa N. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42 (1994): 155-66.

Takeyama, Lisa N. “The Intertemporal Consequences of Unauthorized Reproduction of Intellectual Property.” Journal of Law and Economics 40 (1997): 511-22.

Varian, Hal. “Buying, Sharing and Renting Information Goods.” Journal of Industrial Economics 48, no. 4 (2000): 473-88.

Varian, Hal. “Copying and Copyright.” Journal of Economic Perspectives 19, no. 2 (2005): 121-38.

Watt, Richard. Copyright and Economic Theory: Friends or Foes? Cheltenham, UK: Edward Elgar, 2000.

History of Economic Thought

Hadfield, Gillian K. “The Economics of Copyright: A Historical Perspective.” Copyright Law Symposium (ASCAP) 38 (1992): 1-46.

History

Armstrong, Elizabeth. Before Copyright: The French Book-Privilege System, 1498-1526. Cambridge: Cambridge University Press, 1990.

Birn, Raymond. “The Profits of Ideas: Privileges en librairie in Eighteenth-century France.” Eighteenth-Century Studies 4, no. 2 (1970-71): 131-68.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Dawson, Robert L. The French Booktrade and the “Permission Simple” of 1777: Copyright and the Public Domain. Oxford: Voltaire Foundation, 1992.

Hackett, Alice P., and James Henry Burke. Eighty Years of Best Sellers, 1895-1975. New York: Bowker, 1977.

Nowell-Smith, Simon. International Copyright Law and the Publisher in the Reign of Queen Victoria. Oxford: Clarendon Press, 1968.

Patterson, Lyman. Copyright in Historical Perspective. Nashville: Vanderbilt University Press, 1968.

Rose, Mark. Authors and Owners: The Invention of Copyright. Cambridge: Harvard University Press, 1993.

Saunders, David. Authorship and Copyright. London: Routledge, 1992.

Citation: Khan, B. “An Economic History of Copyright in Europe and the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-copyright-in-europe-and-the-united-states/

The Economics of the Civil War

Roger L. Ransom, University of California, Riverside

The Civil War has been something of an enigma for scholars studying American history. During the first half of the twentieth century, historians viewed the war as a major turning point in American economic history. Charles Beard labeled it a “Second American Revolution,” claiming that “at bottom the so-called Civil War . . . was a social war, ending in the unquestioned establishment of a new power in the government, making vast changes . . . in the course of industrial development, and in the constitution inherited from the Fathers” (Beard and Beard 1927: 53). By the time of the Second World War, Louis Hacker could sum up Beard’s position by simply stating that the war’s “striking achievement was the triumph of industrial capitalism” (Hacker 1940: 373). The “Beard-Hacker Thesis” had become the most widely accepted interpretation of the economic impact of the Civil War. Harold Faulkner devoted two chapters to a discussion of the causes and consequences of the war in his 1943 textbook American Economic History (which was then in its fifth edition), claiming that “its effects upon our industrial, financial, and commercial history were profound” (1943: 340).

In the years after World War II, a new group of economic historians — many of them trained in economics departments — focused their energies on the explanation of economic growth and development in the United States. As they looked for the keys to American growth in the nineteenth century, these economic historians questioned whether the Civil War — with its enormous destruction and disruption of society — could have been a stimulus to industrialization. In his 1955 textbook on American economic history, Ross Robertson mirrored a new view of the Civil War and economic growth when he argued that “persistent, fundamental forces were at work to forge the economic system and not even the catastrophe of internecine strife could greatly affect the outcome” (1955: 249). “Except for those with a particular interest in the economics of war,” claimed Robertson, “the four year period of conflict [1861-65] has had little attraction for economic historians” (1955: 247). Over the next two decades, this became the dominant view of the Civil War’s role in the industrialization of the United States.

Historical research has a way of returning to the same problems over and over. The efforts to explain regional patterns of economic growth and the timing of the United States’ “take-off” into industrialization, together with extensive research into the “economics” of the slave system of the South and the impact of emancipation, brought economic historians back to questions dealing with the Civil War. By the 1990s a new generation of economic history textbooks once again examined the “economics” of the Civil War (Atack and Passell 1994; Hughes and Cain 1998; Walton and Rockoff 1998). This reconsideration of the Civil War by economic historians can be loosely grouped into four broad issues: the “economic” causes of the war; the “costs” of the war; the problem of financing the war; and a re-examination of the Beard-Hacker thesis that the war was a turning point in American economic history.

Economic Causes of the War

No one seriously doubts that the enormous economic stake the South had in its slave labor force was a major factor in the sectional disputes that erupted in the middle of the nineteenth century. Figure 1 plots the total value of all slaves in the United States from 1805 to 1860. In 1805 there were just over one million slaves worth about $300 million; fifty-five years later there were four million slaves worth close to $3 billion. In the 11 states that eventually formed the Confederacy, four out of ten people were slaves in 1860, and these people accounted for more than half the agricultural labor in those states. In the cotton regions the importance of slave labor was even greater. The value of capital invested in slaves roughly equaled the total value of all farmland and farm buildings in the South. Though the value of slaves fluctuated from year to year, there was no prolonged period during which the value of the slaves owned in the United States did not increase markedly. Looking at Figure 1, it is hardly surprising that Southern slaveowners in 1860 were optimistic about the economic future of their region. They were, after all, in the midst of an unparalleled rise in the value of their slave assets.

A major finding of the research into the economic dynamics of the slave system was to demonstrate that the rise in the value of slaves was not based upon unfounded speculation. Slave labor was the foundation of a prosperous economic system in the South. To illustrate just how important slaves were to that prosperity, Gerald Gunderson (1974) estimated what fraction of the income of a white person living in the South of 1860 was derived from the earnings of slaves. Table 1 presents Gunderson’s estimates. In the seven states where most of the cotton was grown, almost one-half the population were slaves, and they accounted for 31 percent of white people’s income; for all 11 Confederate states, slaves represented 38 percent of the population and contributed about 26 percent of whites’ income. Small wonder that Southerners — even those who did not own slaves — viewed any attempt by the federal government to limit the rights of slaveowners over their property as a potentially catastrophic threat to their entire economic system. By itself, the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war when faced with what they viewed as a serious threat to their “peculiar institution” after the electoral victories of the Republican Party and President Abraham Lincoln in the fall of 1860.

Table 1

The Fraction of Whites’ Incomes from Slavery

State Percent of the Population That Were Slaves Per Capita Earnings of Free Whites (in dollars) Slave Earnings per Free White (in dollars) Fraction of Earnings Due to Slavery
Alabama 45 120 50 41.7
South Carolina 57 159 57 35.8
Florida 44 143 48 33.6
Georgia 44 136 40 29.4
Mississippi 55 253 74 29.2
Louisiana 47 229 54 23.6
Texas 30 134 26 19.4
Seven Cotton States 46 163 50 30.6
North Carolina 33 108 21 19.4
Tennessee 25 93 17 18.3
Arkansas 26 121 21 17.4
Virginia 32 121 21 17.4
All 11 States 38 135 35 25.9
Source: Computed from data in Gerald Gunderson (1974: 922, Table 1)

The Northern states also had a huge economic stake in slavery and the cotton trade. The first half of the nineteenth century witnessed an enormous increase in the production of short-staple cotton in the South, and most of that cotton was exported to Great Britain and Europe. Figure 2 charts the growth of cotton exports from 1815 to 1860. By the mid-1830s, cotton shipments accounted for more than half the value of all exports from the United States. Note that there is a marked similarity between the trends in the export of cotton and the rising value of the slave population depicted in Figure 1. There could be little doubt that the prosperity of the slave economy rested on its ability to produce cotton more efficiently than any other region of the world.

The income generated by this “export sector” was a major impetus for growth not only in the South, but in the rest of the economy as well. Douglass North, in his pioneering study of the antebellum U.S. economy, examined the flows of trade within the United States to demonstrate how all regions benefited from the South’s concentration on cotton production (North 1961). Northern merchants gained from Southern demands for shipping cotton to markets abroad, and from the demand by Southerners for Northern and imported consumption goods. The low price of raw cotton produced by slave labor in the American South enabled textile manufacturers — both in the United States and in Britain — to expand production and provide benefits to consumers through a declining cost of textile products. As manufacturing of all kinds expanded at home and abroad, the need for food in cities created markets for foodstuffs that could be produced in the areas north of the Ohio River. And the primary force at work was the economic stimulus from the export of Southern cotton. When James Hammond exclaimed in 1858 that “Cotton is King!” no one rose to dispute the point.

With so much to lose on both sides of the Mason-Dixon Line, economic logic suggests that a peaceful solution to the slave issue would have made far more sense than a bloody war. Yet no solution emerged. One “economic” solution to the slave problem would have been for those who objected to slavery to “buy out” the economic interest of Southern slaveholders. Under such a scheme, the federal government would have purchased slaves. A major problem was that the costs of such a scheme would have been enormous. Claudia Goldin estimates that the cost of having the government buy all the slaves in the United States in 1860 would have been about $2.7 billion (1973: 85, Table 1). Obviously, such a large sum could not be paid all at once. Yet even if the payments were spread over 25 years, the annual costs of such a scheme would involve a tripling of federal government outlays (Ransom and Sutch 1990: 39-42)! The costs could be reduced substantially if instead of freeing all the slaves at once, children were left in bondage until the age of 18 or 21 (Goldin 1973: 85). Yet there would remain the problem of how even those reduced costs could be distributed among various groups in the population. The cost of any “compensated” emancipation scheme was so high that even those who wished to eliminate slavery were unwilling to pay for a “buyout” of those who owned slaves.
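A rough calculation makes the budgetary problem concrete (a back-of-the-envelope sketch; the figure of roughly $63 million for annual federal outlays around 1860 is an outside assumption, not taken from the sources cited here):

\[
\frac{\$2{,}700 \text{ million}}{25 \text{ years}} \approx \$108 \text{ million per year}, \qquad \frac{63 + 108}{63} \approx 2.7 .
\]

On these assumptions, the annual purchase payments alone would have exceeded one and a half times the entire prewar federal budget, which is consistent with the tripling of outlays cited from Ransom and Sutch.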

The high cost of emancipation was not the only way in which economic forces produced strong regional tensions in the United States before 1860. The regional economic specialization, previously noted as an important cause of the economic expansion of the antebellum period, also generated very strong regional divisions on economic issues. Recent research by economic, social and political historians has reopened some of the arguments first put forward by Beard and Hacker that economic changes in the Northern states were a major factor leading to the political collapse of the 1850s. Beard and Hacker focused on the narrow economic aspects of these changes, interpreting them as the efforts of an emerging class of industrial capitalists to gain control of economic policy. More recently, historians have taken a broader view of the situation, arguing that the sectional splits on these economic issues reflected sweeping economic and social changes in the Northern and Western states that were not experienced by people in the South. The term most historians have used to describe these changes is a “market revolution.”

Map 1
Counties Reporting an Urban Population in 1860 (map)

Source: United States Population Census, 1860.

Perhaps the best single indicator of how pervasive the “market revolution” was in the Northern and Western states is the rise of urban places in areas where markets had become important. Map 1 plots the 292 counties that reported an “urban population” in 1860. (The 1860 Census Office defined an “urban place” as a town or city having a population of at least 2,500 people.) Table 2 presents some additional statistics on urbanization by region. In 1860, 6.1 million people — roughly one out of five persons in the United States — lived in an urban county. A glance at either the map or Table 2 reveals the enormous difference in urban development in the South compared to the Northern states. More than two-thirds of all urban counties were in the Northeast and West; those two regions accounted for nearly 80 percent of the urban population of the country. By contrast, less than 7 percent of people in the 11 Southern states of Table 2 lived in urban counties.

Table 2

Urban Population of the United States in 1860 [a]

Region Counties with Urban Populations Total Urban Population in the Region Percent of Region’s Population Living in Urban Counties Region’s Urban Population as Percent of U.S. Urban Population
Northeast [b] 103 3,787,337 35.75 61.66
West [c] 108 1,059,755 13.45 17.25
Border [d] 23 578,669 18.45 9.42
South [e] 51 621,757 6.83 10.12
Far West [f] 7 99,145 15.19 1.54
Total [g] 292 6,141,914 19.77 100.00
Notes:

[a] Urban population is people living in a city or town of at least 2,500.

[b] Includes: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

[c] Includes: Illinois, Indiana, Iowa, Kansas, Minnesota, Nebraska, Ohio, and Wisconsin.

[d] Includes: Delaware, Kentucky, Maryland, and Missouri.

[e] Includes: Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia.

[f] Includes: California, Colorado, the Dakotas, Nevada, New Mexico, Oregon, Utah, and Washington.

[g] Includes the District of Columbia.

Source: U.S. Census of Population, 1860.

The region along the north Atlantic Coast, with its extensive development of commerce and industry, had the largest concentration of urban population in the United States; roughly one-third of the population of the nine states defined as the Northeast in Table 2 lived in urban counties. In the South, the picture was very different. Cotton cultivation with slave labor did not require local financial services or nearby manufacturing activities that might generate urban activities. The 11 states of the Confederacy had only 51 urban counties, and they were widely scattered throughout the region. Western agriculture, with its emphasis on foodstuffs, encouraged urban activity near the source of production. These centers were not necessarily large; indeed, the West had roughly the same number of large and mid-sized cities as the South. However, there were far more small towns scattered throughout settled regions of Ohio, Indiana, Illinois, Wisconsin and Michigan than in the Southern landscape.

Economic policy had played a prominent role in American politics since the birth of the republic in 1790. With the formation of the Whig Party in the 1830s, a number of key economic issues emerged at the national level. To illustrate the extent to which the rise of urban centers and increased market activity in the North led to a growing crisis in economic policy, historians have re-examined four specific areas of legislative action singled out by Beard and Hacker as evidence of a Congressional stalemate in 1860 (Egnal 2001; Ransom and Sutch 2001; 1989; Bensel 1990; McPherson 1988).

Land Policy

1. Land Policy. Settlement of western lands had always been a major bone of contention between slave and free-labor farm interests. The manner in which the federal government distributed land to people could have a major impact on the nature of farming in a region. Northerners wanted to encourage the settlement of farms that would depend primarily on family labor by offering cheap land in small parcels. Southerners feared that such a policy would make it more difficult to keep areas open for settlement by slaveholders who wanted to establish large plantations. This all came to a head with the Homestead Act of 1860, which would have provided 160 acres of free land for anyone who wanted to settle and farm it. Northern and western congressmen strongly favored the bill in the House of Representatives, but the measure received only a single vote from slave states’ representatives. The bill passed, but President Buchanan vetoed it (Bensel 1990: 69-72).

Transportation Improvements

2. Transportation Improvements. Following the opening of the Erie Canal in 1825, there was growing support in the North and the Northwest for government support of improvement in transportation facilities — what were termed in those days “internal improvements”. The need for government-sponsored improvements was particularly urgent in the Great Lakes region (Egnal 2001: 45-50). The appearance of the railroad in the 1840s gave added support to those advocating government subsidies to promote transportation. Southerners required far fewer internal improvements than people in the Northwest, and they tended to view federal subsidies for such projects as part of a “deal” between western and eastern interests that held no obvious gains for the South. The bill that best illustrates the regional disputes on transportation was the Pacific Railway Bill of 1860, which proposed a transcontinental railway link to the West Coast. The bill failed to pass the House, receiving no votes from congressmen representing districts of the South where there was a significant slave population (Bensel 1990: 70-71).

The Tariff

3. The Tariff. Southerners, with their emphasis on staple agriculture and need to buy goods produced outside the South, strongly objected to the imposition of duties on imported goods. Manufacturers in the Northeast, on the other hand, supported a high tariff as protection against cheap British imports. People in the West were caught in the middle of this controversy. Like the agricultural South they disliked the idea of a high “protective” tariff that raised the cost of imports. However, the tariff was also the main source of federal revenue at this time, and Westerners needed government funds for the transportation improvements they supported in Congress. As a result, a compromise reached by western and eastern interests in the tariff debates of 1857 was to support a “moderate” tariff, with duties set high enough to generate revenue and offer some protection to Northern manufacturers while not putting too much of a burden on Western and Eastern consumers. Southerners complained that even this level of protection was excessive and that it was one more example of the willingness of the West and the North to make economic bargains at the expense of the South (Ransom and Sutch 2001; Egnal 2001: 50-52).

Banking

4. Banking. The federal government’s role in the chartering and regulation of banks was a volatile political issue throughout the antebellum period. In 1832 President Andrew Jackson created a major furor when he vetoed a bill to recharter the Second Bank of the United States. Jackson’s veto ushered in a period termed “free banking” in the United States, in which the chartering and regulation of banks was left entirely in the hands of state governments. Banks were a relatively new economic institution at this point in time, and opinions were sharply divided over the degree to which the federal government should regulate banks. In the Northeast, where over 60 percent of all banks were located, there was strong support by 1860 for the creation of a system of banks that would be chartered and regulated by the federal government. But in the South, which had little need for local banking services, there was little enthusiasm for such a proposal. Here again, the western states were caught in the middle. While they worried that a system of “national” banks would be controlled by the already dominant eastern banking establishment, western farmers found themselves in need of local banking services for financing their crops. By 1860 many were inclined to support the Republican proposal for a National Banking System; however, Southern opposition killed the National Bank Bill in 1860 (Ransom and Sutch 2001; Bensel 1990).

The growth of an urbanized market society in the North produced more than just a legislative program of political economy that Southerners strongly resisted. Several historians have taken a much broader view of the market revolution and industrialization in the North. They see the economic conflict of North and South, in the words of Richard Brown, as “the conflict of a modernizing society” (1976: 161). A leading historian of the Civil War, James McPherson, argues that Southerners were correct when they claimed that the revolutionary program sweeping through the North threatened their way of life (1983; 1988). James Huston (1999) carries the argument one step further by arguing that Southerners were correct in their fears that the triumph of this coalition would eventually lead to an assault by Northern politicians on slave property rights.

All this provided ample argument for those clamoring for the South to leave the Union in 1861. But why did the North fight a war rather than simply letting the unhappy Southerners go in peace? It seems unlikely that anyone will ever be able to show that the “gains” from the war outweighed the “costs” in economic terms. Still, war is always a gamble, and with neither the costs nor the benefits easily calculated before the fact, leaders are often tempted to take the risk. The evidence above certainly lent strong support to those arguing that it made sense for the South to fight if a belligerent North threatened the institution of slavery. An economic case for the North is more problematic. Most writers argue that the decision for war on Lincoln’s part was not based primarily on economic grounds. However, Gerald Gunderson points out that if, as many historians argue, Northern Republicans were intent on controlling the spread of slavery, then a war to keep the South in the Union might have made sense. Gunderson compares the “costs” of the war (which we discuss below) with the cost of “compensated” emancipation and notes that the two are roughly the same order of magnitude — 2.5 to 3.7 billion dollars (1974: 940-42). Thus, going to war made as much “economic sense” as buying out the slaveholders. Gunderson makes the further point, which has been echoed by other writers, that the only way the North could ensure that its program to contain slavery would be “enforced” was to keep the South in the Union. Allowing the South to leave the Union would mean that the North could no longer control the expansion of slavery anywhere in the Western Hemisphere (Ransom 1989; Ransom and Sutch 2001; Weingast 1998; Weingast 1995; Wolfson 1995). What is novel about these interpretations of the war is that they argue it was the economic pressures of “modernization” in the North that made Northern policy towards secession in 1861 far more aggressive than in the traditional story of a North forced into military action by the South’s attack on Fort Sumter.

That is not to say that either side wanted war — for economic or any other reason. Abraham Lincoln probably summarized the situation as well as anyone when he observed in his second inaugural address that: “Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.”

The “Costs” of the War

The Civil War has often been called the first “modern” war. In part this reflects the enormous effort expended by both sides to conduct the war. What was the cost of this conflict? The most comprehensive effort to answer this question is the work of Claudia Goldin and Frank Lewis (1975; 1978). The Goldin and Lewis estimates of the costs of the war are presented in Table 3. The costs are divided into two groups: the direct costs, which include government expenditures on the war plus the losses from the destruction of property and from casualties (lost human capital); and what Goldin and Lewis term the indirect costs of the war, which capture its economic consequences after 1865. Goldin and Lewis estimate that the combined outlays of both governments — in 1860 dollars — totaled $3.3 billion. To this they add $1.8 billion to account for the discounted economic value of casualties in the war, and $1.5 billion to account for wartime destruction of property in the South. This gives a total of $6.6 billion in direct costs — with each region incurring roughly half the total.

Table 3

The Costs of the Civil War

(Millions of 1860 Dollars)

Item South North Total
Direct Costs:
Government Expenditures 1,032 2,302 3,334
Physical Destruction 1,487 - 1,487
Loss of Human Capital 767 1,064 1,831
Total Direct Costs of the War 3,286 3,366 6,652
Per capita (dollars) 376 148 212
Indirect Costs:
Total Decline in Consumption 6,190 1,149 7,339
Less: Effect of Emancipation 1,960 - 1,960
Less: Effect of Cotton Prices 1,670 - 1,670
Total Indirect Costs of the War 2,560 1,149 3,709
Per capita (dollars) 293 51 118
Total Costs of the War 5,846 4,515 10,361
Per capita (dollars) 670 199 330
Population in 1860 (millions) 8.73 22.70 31.43

Source: Ransom (1998: 51, Table 3-1); Goldin and Lewis (1975; 1978).

While these figures are only a very rough estimate of the actual costs, they provide an educated guess as to the order of magnitude of the economic effort required to wage the war, and it seems likely that if there is a bias, it is to understate the total. (Thus, for example, the estimated “economic” losses from casualties ignore the emotional cost of 625,000 deaths, and the estimates of property destruction were quite conservative.) Even so, the direct cost of the war as calculated by Goldin and Lewis was 1.5 times the total gross national product of the United States for 1860 — an enormous sum in comparison with any military effort by the United States up to that point. What stands out in addition to the enormity of the bill is the disparity in the burden these costs represented to the people in the North and the South. On a per capita basis, the cost to the Northern population was about $150 — roughly equal to one year’s per capita income. The Southern burden was two and a half times that amount — $376 per man, woman and child.
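The per capita burdens quoted here can be checked directly against Table 3 by dividing each region’s direct costs by its 1860 population, and the comparison with national output implies a rough figure for 1860 GNP (all amounts in millions of 1860 dollars, populations in millions):

\[
\frac{3{,}286}{8.73} \approx 376 \text{ (South)}, \qquad \frac{3{,}366}{22.70} \approx 148 \text{ (North)}, \qquad \text{GNP}_{1860} \approx \frac{6{,}652}{1.5} \approx \$4.4 \text{ billion}.
\]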

Staggering though these numbers are, they represent only a fraction of the full costs of the war, which lingered long after the fighting had stopped. One way to measure the full “costs” and “benefits” of the war, Goldin and Lewis argue, is to estimate the value of the observed postwar stream of consumption in each region and compare that figure to the estimated hypothetical stream of consumption had there been no war (1975: 309-10). (All the figures for the costs in Table 3 have been adjusted to reflect their discounted value in 1860.) The Goldin and Lewis estimate of the discounted value of lost consumption for the South was $6.2 billion; for the North the estimate was $1.15 billion. Ingenious though this methodology is, it suffers from the serious drawback that consumption lost for any reason — not just the war — is included in the figure. Particularly for the South, not all the decline in output after 1860 could be directly attributed to the war; the growth in the demand for cotton that fueled the antebellum economy did not continue, and there was a dramatic change in the supply of labor due to emancipation. Goldin and Lewis therefore adjusted their estimate of lost consumption due to the war down to $2.56 billion for the South in order to exclude the effects of emancipation and the collapse of the cotton market. The magnitudes of the indirect effects are detailed in Table 3. After the adjustments, the estimated costs of the war totaled more than $10 billion. Allocating the costs to each region produces a per capita burden of $670 in the South and $199 in the North. What Table 3 does not show is the extent to which these expenses were spread out over a long period of time. In the North, consumption had regained its prewar level by 1873; in the South, however, consumption remained below its 1860 level to the end of the century. We shall return to this issue below.
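Stated schematically, the Goldin and Lewis procedure measures the indirect cost as a discounted consumption gap (a stylized rendering of the method described above, not their exact specification; r stands for whatever discount rate they employed):

\[
\text{Indirect cost} = \sum_{t=1861}^{T} \frac{C_t^{\text{no war}} - C_t^{\text{observed}}}{(1+r)^{\,t-1860}} ,
\]

where \(C_t^{\text{no war}}\) is the hypothetical consumption stream had there been no war. The adjustments described above amount to removing from this gap the portions attributable to emancipation and to the collapse of the cotton market rather than to the war itself.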

Financing the War

No war in American history strained the economic resources of the nation as the Civil War did. Governments on both sides were forced to resort to borrowing on an unprecedented scale to meet the financial obligations for the war. With more developed markets and an industrial base that could ultimately produce the goods needed for the war, the Union was clearly in a better position to meet this challenge. The South, on the other hand, had always relied on either Northern or foreign capital markets for its financial needs, and it had virtually no manufacturing establishments to produce military supplies. From the outset, the Confederates relied heavily on funds borrowed outside the South to purchase supplies abroad.

Figure 3 shows the sources of revenue collected by the Union government during the war. In 1862 and 1863 the government covered less than 15 percent of its total expenditures through taxes. With the imposition of a higher tariff, excise taxes, and the introduction of the first income tax in American history, this situation improved somewhat, and by the war’s end 25 percent of the federal government revenues had been collected in taxes. But what of the other 75 percent? In 1862 Congress authorized the U.S. Treasury to issue currency notes that were not backed by gold. By the end of the war, the treasury had printed more than $250 million worth of these “Greenbacks” and, together with the issue of gold-backed notes, the printing of money accounted for 18 percent of all government revenues. This still left a huge shortfall in revenue that was not covered by either taxes or the printing of money. The remaining revenues were obtained by borrowing funds from the public. Between 1861 and 1865 the debt obligation of the Federal government increased from $65 million to $2.7 billion (including the increased issuance of notes by the Treasury). The financial markets of the North were strained by these demands, but they proved equal to the task. In all, Northerners bought almost $2 billion worth of treasury notes and absorbed $700 million of new currency. Consequently, the Northern economy was able to finance the war without a significant reduction in private consumption. While the increase in the national debt seemed enormous at the time, events were to prove that the economy was more than able to deal with it. Indeed, several economic historians have claimed that the creation and subsequent retirement of the Civil War debt ultimately proved to be a significant impetus to post-war growth (Williamson 1974; James 1984). Wartime finance also prompted a significant change in the banking system of the United States. In 1862 Congress finally passed legislation creating the National Banking System. Their motive was not only to institute the program of banking reform pressed for many years by the Whigs and the Republicans; the newly-chartered federal banks were also required to purchase large blocs of federal bonds to hold as security against the issuance of their national bank notes.
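The implied division of Union war revenues is simple arithmetic on the shares just quoted, with borrowing obtained as the residual:

\[
100\% - \underbrace{25\%}_{\text{taxes}} - \underbrace{18\%}_{\text{money creation}} \approx \underbrace{57\%}_{\text{borrowing}} .
\]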

The efforts of the Confederate government to pay for its war effort were far more chaotic than in the North, and reliable expenditure and revenue data are not available. Figure 4 presents the best revenue estimates we have for the Richmond government from 1861 through November 1864 (Burdekin and Langdana 1993). Several features of Confederate finance immediately stand out in comparison to the Union effort. First is the failure of the Richmond government to finance its war expenditures through taxation. Over the course of the war, tax revenues accounted for only 11 percent of all revenues. Another contrast was the much higher fraction of revenues accounted for by the issuance of currency on the part of the Richmond government. Over a third of the Confederate government’s revenue came from the printing press. The remainder came in the form of bonds, many of which were sold abroad in either London or Amsterdam. The reliance on borrowed funds proved to be a growing problem for the Confederate treasury. By mid-1864 the costs of paying interest on outstanding government bonds absorbed more than half of all government expenditures. The difficulties of collecting taxes and floating new bond issues had become so severe that in the final year of the war the total revenues collected by the Confederate government actually declined.

The printing of money and borrowing on such a huge scale had a dramatic effect on the economic stability of the Confederacy. The best measure of this instability and eventual collapse can be seen in the behavior of prices. An index of consumer prices is plotted together with the stock of money from early 1861 to April 1865 in Figure 5. By the beginning of 1862 prices had already doubled; by the middle of 1863 they had increased by a factor of 13. Up to this point, the inflation could be largely attributed to the money placed in the hands of consumers by the huge deficits of the government. Prices and the stock of money had risen at roughly the same rate. This represented a classic case of what economists call demand-pull inflation: too much money chasing too few goods. However, from the middle of 1863 on, the behavior of prices no longer mirrors the money supply. Several economic historians have suggested that from this point on prices reflected people’s confidence in the future of the Confederacy as a viable state (Burdekin and Langdana 1993; Weidenmier 2000). Figure 5 identifies three major military “turning points” between 1863 and 1865. In late 1863 and early 1864, following the Confederate defeats at Gettysburg and Vicksburg, prices rose very sharply despite a marked decrease in the growth of the money supply. When the Union offensives in Georgia and Virginia stalled in the summer of 1864, prices stabilized for a few months, only to resume their upward spiral after the fall of Atlanta in September 1864. By that time, of course, the Confederate cause was clearly doomed. By the end of the war, inflation had reached a point where the value of the Confederate currency was virtually zero. People had taken to engaging in barter or using Union dollars (if they could be found) to conduct their transactions. The collapse of the Confederate monetary system was a reflection of the overall collapse of the economy’s efforts to sustain the war effort.

The Union also experienced inflation as a result of deficit finance during the war; the consumer price index rose from 100 at the outset of the war to 175 by the end of 1865. While this was nowhere near the degree of economic disruption caused by the increase in prices experienced by the Confederacy, this near-doubling of prices did have an effect on how the burden of the war’s costs was distributed among various groups in each economy. Inflation is a tax, and it tends to fall on those who are least able to afford it. One group that tends to be vulnerable to a sudden rise in prices is wage earners. Table 4 presents data on prices and wages in the United States and the Confederacy. The series for wages has been adjusted to reflect the decline in purchasing power due to inflation. Not surprisingly, wage earners in the South saw the real value of their wages practically disappear by the end of the war. In the North the situation was not as severe, but wages certainly did not keep pace with prices; the real value of wages fell by about 20 percent. It is not obvious why this happened. The need for manpower in the army and the demand for war production should have created a labor shortage that would drive wages higher. While the economic situation of laborers deteriorated during the war, one must remember that wage earners in 1860 were still a relatively small share of the total labor force. Agriculture, not industry, was the largest economic sector in the North, and farmers fared much better in terms of their income during the war than did wage earners in the manufacturing sector (Ransom 1998: 255-64; Atack and Passell 1994: 368-70).

Table 4

Indices of Prices and Real Wages During the Civil War

(1860=100)

Union Confederate
Year Prices Real Wages Prices Real Wages
1860 100 100 100 100
1861 101 100 121 86
1862 113 93 388 35
1863 139 84 1,452 19
1864 176 77 3,992 11
1865 175 82
Source: Union: (Atack and Passell 1994: 367, Table 13.5)

Confederate: (Lerner 1954)
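The real-wage series in Table 4 is the nominal wage index deflated by the price index, with all indices set to 100 in 1860 (a reconstruction of the adjustment described in the text, not the authors’ published worksheet):

\[
\text{Real wage}_t = 100 \times \frac{\text{Nominal wage}_t}{\text{Price}_t} .
\]

For the Union in 1864, for example, a price index of 176 and a real wage index of 77 imply a nominal wage index of about 0.77 x 176 ≈ 136: money wages rose roughly 36 percent while prices rose 76 percent.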

Overall, it is clear that the North did a far better job of mobilizing the economic resources needed to carry on the war. The greater sophistication and size of Northern markets meant that the Union government could call upon institutional arrangements that allowed for a more efficient system of redirecting resources into wartime production than was possible in the South. The Confederates depended far more upon outside resources and direct intervention in the production of goods and services for their war effort, and in the end the domestic economy could not bear up under the strain. It is worth noting in this regard that the Union blockade, which by 1863 had largely closed down not only the external trade of the South with Europe, but also the coastal trade that had been an important element in the antebellum transportation system, may have played a more crucial part in bringing about the eventual collapse of the Southern war effort than is often recognized (Ransom 2002).

The Civil War as a Watershed in American Economic History

It is easy to see why contemporaries believed that the Civil War was a watershed event in American history. With a cost of billions of dollars and 625,000 men killed, slavery had been abolished and the Union had been preserved. Economic historians viewing the event fifty years later could note that the half-century following the Civil War had been a period of extraordinary growth and expansion of the American economy. But was the war really the “Second American Revolution” that Beard (1927) and Louis Hacker (1940) claimed? That was certainly the prevailing view when Thomas Cochran (1961) published an article titled “Did the Civil War Retard Industrialization?” Cochran pointed out that, until the 1950s, there was no quantitative evidence to prove or disprove the Beard-Hacker thesis. Recent quantitative research, he argued, showed that the war had actually slowed the rate of industrial growth. Stanley Engerman expanded Cochran’s argument by attacking the Beard-Hacker claim that political changes — particularly the passage in 1862 of the Republican program of political economy that had been bottled up in Congress by Southern opposition — were instrumental in accelerating economic growth (Engerman 1966). The major thrust of these arguments was that neither the war nor the legislation was necessary for industrialization — which was already well underway by 1860. “Aside from commercial banking,” noted one commentator, “the Civil War appears not to have started or created any new patterns of economic institutional change” (Gilchrist and Lewis 1965: 174). Had there been no war, these critics argued, the trajectory of economic growth that emerged after 1870 would have done so anyway.

Despite this criticism, the notion of a “second” American Revolution lives on. Clearly the Beards and Hacker were in error in their claim that industrial growth accelerated during the war. The Civil War, like most modern wars, involved a huge effort to mobilize resources to carry on the fight. This had the effect of making it appear that the economy was expanding due to the production of military goods. However, Beard and Hacker — and a good many other historians — mistook this increased wartime activity for a net increase in output when in fact resources were shifted away from consumer products towards wartime production (Ransom 1989: Chapter 7). But what of the larger question of political change resulting from the war? Critics of Beard and Hacker claimed that the Republican program would eventually have been enacted even if there had been no war; hence the war was not a crucial turning point in economic development. The problem with this line of argument is that it completely misses the point of the Beard-Hacker argument. They would readily agree that in the absence of a war the Republican program of political economy would triumph — and that is why there was a war! Historians who argue that economic forces were an underlying cause of sectional conflicts go on to point out that war was probably the only way to settle those conflicts. In this view, the war was a watershed event in the economic development of the United States because the Union military victory ensured that the “market revolution” would not be stymied by the South’s attempt to break up the Union (Ransom 1999).

Whatever the effects of the war on industrial growth, economic historians agree that the war had a profound effect on the South. The destruction of slavery meant that the entire Southern economy had to be rebuilt. This turned out to be a monumental task; far larger than anyone at the time imagined. As noted above in the discussion of the indirect costs of the war, Southerners bore a disproportionate share of those costs and the burden persisted long after the war had ended. The failure of the postbellum Southern economy to recover has spawned a huge literature that goes well beyond the effects of the war.

Economic historians who have examined the immediate effects of the war have reached a few important conclusions. First, the idea that the South was physically destroyed by the fighting has been largely discarded. Most writers have accepted the argument of Ransom and Sutch (2001) that the major “damage” to the South from the war was the depreciation and neglect of property on farms as a significant portion of the male workforce went off to war for several years. Second was the impact of emancipation. Slaveholders lost their enormous investment in slaves as a result of emancipation. Planters were consequently strapped for capital in the years immediately after the war, and this affected their options with regard to labor contracts with the freedmen and in their dealings with capital markets to obtain credit for the planting season. The freedmen and their families responded to emancipation by withdrawing up to a third of their labor from the market. While this was a perfectly reasonable response, it had the effect of creating an apparent labor “shortage,” and it convinced white landlords that a free labor system could never work with the ex-slaves, further complicating an already unsettled labor market. In the longer run, as Gavin Wright (1986) put it, emancipation transformed the white landowners from “laborlords” to “landlords.” This was not a simple transition. While they were able, for the most part, to cling to their landholdings, the ex-slaveholders were ultimately forced to break up the great plantations that had been the cornerstone of the antebellum Southern economy and to rent small parcels of land to the freedmen under a new form of rental contract: sharecropping. From a situation where tenancy was extremely rare, the South suddenly became an agricultural economy characterized by tenant farms.

The result was an economy that remained heavily committed not only to agriculture, but to the staple crop of cotton. Crop output in the South fell dramatically at the end of the war, and had not yet recovered its antebellum level by 1879. The loss of income was particularly hard on white Southerners; per capita income of whites in 1857 had been $125; in 1879 it was just over $80 (Ransom and Sutch 1979). Table 5 compares the growth of GNP in the United States with the growth of gross crop output in the Southern states from 1874 to 1904. Over the last quarter of the nineteenth century, gross crop output in the South rose by about one percent per year at a time when the GNP of the United States (including the South) was rising at twice that rate. By the end of the century, Southern per capita income had fallen to roughly two-thirds the national level, and the South was locked in a cycle of poverty that lasted well into the twentieth century. How much of this failure was due solely to the war remains open to debate. What is clear is that neither the dreams of those who fought for an independent South in 1861 nor the hopes of those who believed a “New South” might emerge from the destruction of war after 1865 were realized.

Table 5

Annual Rates of Growth of Gross National Product of the U.S. and the Gross Southern Crop Output, 1874 to 1904
Annual Percentage Rate of Growth
Interval Gross National Product of the U.S. Gross Southern Crop Output
1874 to 1884 2.79 1.57
1879 to 1889 1.91 1.14
1884 to 1894 0.96 1.51
1889 to 1899 1.15 0.97
1894 to 1904 2.30 0.21
1874 to 1904 2.01 1.10
Source: Ransom and Sutch (1979: 140, Table 7.3).
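The figures in Table 5 are annual compound growth rates; over an interval of T years, an aggregate Y growing at annual rate r satisfies

\[
r = \left( \frac{Y_T}{Y_0} \right)^{1/T} - 1 .
\]

Compounding the 1874-to-1904 row over its thirty years, U.S. GNP grew by a factor of about (1.0201)^30 ≈ 1.82, while Southern crop output grew by only (1.011)^30 ≈ 1.39, the divergence that underlay the South’s falling relative income.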

References

Atack, Jeremy, and Peter Passell. A New Economic View of American History from Colonial Times to 1940. Second edition. New York: W.W. Norton, 1994.

Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. New York: Macmillan, 1927.

Bensel, Richard F. Yankee Leviathan: The Origins of Central State Authority in America, 1859-1877. New York: Cambridge University Press, 1990.

Brown, Richard D. Modernization: The Transformation of American Life, 1600-1865. New York: Hill and Wang, 1976.

Burdekin, Richard C.K., and Farrokh K. Langdana. “War Finance in the Southern Confederacy.” Explorations in Economic History 30 (1993): 352-377.

Cochran, Thomas C. “Did the Civil War Retard Industrialization?” Mississippi Valley Historical Review 48 (September 1961): 197-210.

Egnal, Marc. “The Beards Were Right: Parties in the North, 1840-1860.” Civil War History 47 (2001): 30-56.

Engerman, Stanley L. “The Economic Impact of the Civil War.” Explorations in Entrepreneurial History, second series 3 (1966): 176-199.

Faulkner, Harold Underwood. American Economic History. Fifth edition. New York: Harper & Brothers, 1943.

Gilchrist, David T., and W. David Lewis, editors. Economic Change in the Civil War Era. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1965.

Goldin, Claudia Dale. “The Economics of Emancipation.” Journal of Economic History 33 (1973): 66-85.

Goldin, Claudia, and Frank Lewis. “The Economic Costs of the American Civil War: Estimates and Implications.” Journal of Economic History 35 (1975): 299-326.

Goldin, Claudia, and Frank Lewis. “The Post-Bellum Recovery of the South and the Cost of the Civil War: Comment.” Journal of Economic History 38 (1978): 487-492.

Gunderson, Gerald. “The Origin of the American Civil War.” Journal of Economic History 34 (1974): 915-950.

Hacker, Louis. The Triumph of American Capitalism: The Development of Forces in American History to the End of the Nineteenth Century. New York: Columbia University Press, 1940.

Hughes, J.R.T., and Louis P. Cain. American Economic History. Fifth edition. New York: Addison Wesley, 1998.

Huston, James L. “Property Rights in Slavery and the Coming of the Civil War.” Journal of Southern History 65 (1999): 249-286.

James, John. “Public Debt Management and Nineteenth-Century American Economic Growth.” Explorations in Economic History 21 (1984): 192-217.

Lerner, Eugene. “Money, Prices and Wages in the Confederacy, 1861-65.” Ph.D. dissertation, University of Chicago, Chicago, 1954.

McPherson, James M. “Antebellum Southern Exceptionalism: A New Look at an Old Question.” Civil War History 29 (1983): 230-244.

McPherson, James M. Battle Cry of Freedom: The Civil War Era. New York: Oxford University Press, 1988.

North, Douglass C. The Economic Growth of the United States, 1790-1860. Englewood Cliffs: Prentice Hall, 1961.

Ransom, Roger L. Conflict and Compromise: The Political Economy of Slavery, Emancipation, and the American Civil War. New York: Cambridge University Press, 1989.

Ransom, Roger L. “The Economic Consequences of the American Civil War.” In The Political Economy of War and Peace, edited by M. Wolfson. Norwell, MA: Kluwer Academic Publishers, 1998.

Ransom, Roger L. “Fact and Counterfact: The ‘Second American Revolution’ Revisited.” Civil War History 45 (1999): 28-60.

Ransom, Roger L. “The Historical Statistics of the Confederacy.” In The Historical Statistics of the United States, Millennial Edition, edited by Susan Carter and Richard Sutch. New York: Cambridge University Press, 2002.

Ransom, Roger L., and Richard Sutch. “Growth and Welfare in the American South in the Nineteenth Century.” Explorations in Economic History 16 (1979): 207-235.

Ransom, Roger L., and Richard Sutch. “Who Pays for Slavery?” In The Wealth of Races: The Present Value of Benefits from Past Injustices, edited by Richard F. America, 31-54. Westport, CT: Greenwood Press, 1990.

Ransom, Roger L., and Richard Sutch. “Conflicting Visions: The American Civil War as a Revolutionary Conflict.” Research in Economic History 20 (2001)

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. Second edition. New York: Cambridge University Press, 2001.

Robertson, Ross M. History of the American Economy. Second edition. New York: Harcourt Brace and World, 1955.

United States, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Two volumes. Washington: U.S. Government Printing Office, 1975.

Walton, Gary M., and Hugh Rockoff. History of the American Economy. Eighth edition. New York: Dryden, 1998.

Weidenmier, Marc. “The Market for Confederate Bonds.” Explorations in Economic History 37 (2000): 76-97.

Weingast, Barry. “The Economic Role of Political Institutions: Market Preserving Federalism and Economic Development.” Journal of Law, Economics and Organization 11 (1995): 1-31.

Weingast, Barry R. “Political Stability and Civil War: Institutions, Commitment, and American Democracy.” In Analytic Narratives, edited by Robert Bates et al. Princeton: Princeton University Press, 1998.

Williamson, Jeffrey. “Watersheds and Turning Points: Conjectures on the Long-Term Impact of Civil War Financing.” Journal of Economic History 34 (1974): 636-661.

Wolfson, Murray. “A House Divided against Itself Cannot Stand.” Conflict Management and Peace Science 14 (1995): 115-141.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Ransom, Roger. “Economics of the Civil War”. EH.Net Encyclopedia, edited by Robert Whaples. August 24, 2001. URL http://eh.net/encyclopedia/the-economics-of-the-civil-war/

Child Labor during the British Industrial Revolution

Carolyn Tuttle, Lake Forest College

During the late eighteenth and early nineteenth centuries Great Britain became the first country to industrialize. Because of this, it was also the first country where the nature of children’s work changed so dramatically that child labor became seen as a social problem and a political issue.

This article examines the historical debate about child labor in Britain, Britain’s political response to problems with child labor, quantitative evidence about child labor during the 1800s, and economic explanations of the practice of child labor.

The Historical Debate about Child Labor in Britain

Child Labor before Industrialization

Children of poor and working-class families had worked for centuries before industrialization – helping around the house or assisting in the family’s enterprise when they were able. The practice of putting children to work was first documented in the medieval era, when fathers had their children spin thread for them to weave on the loom. Children performed a variety of tasks that were auxiliary to their parents but critical to the family economy. The family’s household needs determined the family’s supply of labor, and “the interdependence of work and residence, of household labor needs, subsistence requirements, and family relationships constituted the ‘family economy'” (Tilly and Scott 1978, 12).

Definitions of Child Labor

The term “child labor” generally refers to children who work to produce a good or a service which can be sold for money in the marketplace regardless of whether or not they are paid for their work. A “child” is usually defined as a person who is dependent upon other individuals (parents, relatives, or government officials) for his or her livelihood. The exact ages of “childhood” differ by country and time period.

Preindustrial Jobs

Children who lived on farms worked with the animals or in the fields planting seeds, pulling weeds and picking the ripe crop. Ann Kussmaul’s (1981) research uncovered a high percentage of youths working as servants in husbandry in the sixteenth century. Boys looked after the draught animals, cattle and sheep while girls milked the cows and cared for the chickens. Children who worked in homes were either apprentices, chimney sweeps, domestic servants, or assistants in the family business. As apprentices, children lived and worked with their master who established a workshop in his home or attached to the back of his cottage. The children received training in the trade instead of wages. Once they became fairly skilled in the trade they became journeymen. By the time they reached the age of twenty-one, most could start their own business because they had become highly skilled masters. Both parents and children considered this a fair arrangement unless the master was abusive. The infamous chimney sweeps, however, had apprenticeships considered especially harmful and exploitative. Boys as young as four would work for a master sweep who would send them up the narrow chimneys of British homes to scrape the soot off the sides. The first labor law passed in Britain to protect children from poor working conditions, the Act of 1788, attempted to improve the plight of these “climbing boys.” Around age twelve many girls left home to become domestic servants in the homes of artisans, traders, shopkeepers and manufacturers. They received a low wage, and room and board in exchange for doing household chores (cleaning, cooking, caring for children and shopping).

Children who were employed as assistants in domestic production (or what is also called the cottage industry) were in the best situation because they worked at home for their parents. Children who were helpers in the family business received training in a trade and their work directly increased the productivity of the family and hence the family’s income. Girls helped with dressmaking, hat making and button making while boys assisted with shoemaking, pottery making and horse shoeing. Although hours varied from trade to trade and family to family, children usually worked twelve hours per day with time out for meals and tea. These hours, moreover, were not regular over the year or consistent from day to day. The weather and family events affected the number of hours in a month children worked. This form of child labor was not viewed by society as cruel or abusive but was accepted as necessary for the survival of the family and the development of the child.

Early Industrial Work

Once the first rural textile mills were built (1769) and child apprentices were hired as primary workers, the connotation of “child labor” began to change. These workplaces became known as the “dark satanic mills” of William Blake’s phrase, and E. P. Thompson described them as “places of sexual license, foul language, cruelty, violent accidents, and alien manners” (1966, 307). Although long hours had been the custom for agricultural and domestic workers for generations, the factory system was criticized for strict discipline, harsh punishment, unhealthy working conditions, low wages, and inflexible work hours. The factory depersonalized the employer-employee relationship and was attacked for stripping the worker’s freedom, dignity and creativity. These child apprentices were paupers taken from orphanages and workhouses and were housed, clothed and fed but received no wages for their long day of work in the mill. A conservative estimate is that around 1784 one-third of the total workers in country mills were apprentices and that their numbers reached 80 to 90% in some individual mills (Collier, 1964). Mill owners such as Sir Robert Peel and Samuel Greg solved their labor shortages by employing parish apprentices, a practice that persisted even after the First Factory Act of 1802 attempted to improve the conditions of those apprentices.

After the invention and adoption of Watt’s steam engine, mills no longer had to locate near water or rely on apprenticed orphans; hundreds of factory towns and villages developed in Lancashire, Manchester, Yorkshire and Cheshire. The factory owners began to hire children from poor and working-class families to work in these factories preparing and spinning cotton, flax, wool and silk.

The Child Labor Debate

What happened to children within these factory walls became a matter of intense social and political debate that continues today. Pessimists such as Alfred (1857), Engels (1926), Marx (1909), and Webb and Webb (1898) argued that children worked under deplorable conditions and were being exploited by the industrialists. A picture was painted of the “dark satanic mill” where children as young as five and six years old worked for twelve to sixteen hours a day, six days a week without recess for meals in hot, stuffy, poorly lit, overcrowded factories to earn as little as four shillings per week. Reformers called for child labor laws and after considerable debate, Parliament took action and set up a Royal Commission of Inquiry into children’s employment. Optimists, on the other hand, argued that the employment of children in these factories was beneficial to the child, family and country and that the conditions were no worse than they had been on farms, in cottages or up chimneys. Ure (1835) and Clapham (1926) argued that the work was easy for children and helped them make a necessary contribution to their family’s income. Many factory owners claimed that employing children was necessary for production to run smoothly and for their products to remain competitive. John Wesley, the founder of Methodism, recommended child labor as a means of preventing youthful idleness and vice. Ivy Pinchbeck (1930) pointed out, moreover, that working hours and conditions had been as bad in the older domestic industries as they were in the industrial factories.

Factory Acts

Although the debate over whether children were exploited during the British Industrial Revolution continues today [see Nardinelli (1988) and Tuttle (1998)], Parliament passed several child labor laws after hearing the evidence collected. The three laws that most affected the employment of children in the textile industry were the Cotton Factories Regulation Act of 1819 (which set the minimum working age at 9 and maximum working hours at 12), the Regulation of Child Labor Law of 1833 (which established paid inspectors to enforce the laws) and the Ten Hours Bill of 1847 (which limited working hours to 10 for children and women).

The Extent of Child Labor

The significance of child labor during the Industrial Revolution was attached to both the changes in the nature of child labor and the extent to which children were employed in the factories. Cunningham (1990) argues that the idleness of children was more of a problem during the Industrial Revolution than exploitation resulting from employment. He examines the Report on the Poor Laws in 1834 and finds that in parish after parish there was very little employment for children. In contrast, Cruickshank (1981), Hammond and Hammond (1937), Nardinelli (1990), Redford (1926), Rule (1981), and Tuttle (1999) claim that a large number of children were employed in the textile factories. These two seemingly contradictory claims can be reconciled because the labor market for children was not a national market. Instead, child labor was a regional phenomenon: a high incidence of child labor existed in the manufacturing districts, while few children were employed in the rural and farming districts.

Since the first reliable British Census to inquire about children’s work was taken in 1841, it is impossible to compare the number of children employed on farms and in cottage industry with the number of children employed in factories during the heart of the British Industrial Revolution. It is possible, however, to get a sense of how many children were employed by the industries considered the “leaders” of the Industrial Revolution – textiles and coal mining. Although there is still no consensus on the degree to which industrial manufacturers depended on child labor, research by several economic historians has uncovered several facts.

Estimates of Child Labor in Textiles

Using data from an early British Parliamentary Report (1819[HL.24]CX), Freudenberger, Mather and Nardinelli concluded that “children formed a substantial part of the labor force” in the textile mills (1984, 1087). They calculated that while only 4.5% of the cotton workers were under 10, 54.5% were under the age of 19 – confirmation that the employment of children and youths was pervasive in cotton textile factories (1984, 1087). Tuttle’s research using a later British Parliamentary Report (1834(167)XIX) shows this trend continued. She calculated that children under 13 comprised roughly 10 to 20% of the work forces in the cotton, wool, flax, and silk mills in 1833. The employment of youths between the ages of 13 and 18 was higher than that of younger children, comprising roughly 23 to 57% of the work forces in cotton, wool, flax, and silk mills. Cruickshank also confirms that the contribution of children to textile work forces was significant. She showed that the growth of the factory system meant that from one-sixth to one-fifth of the total work force in the textile towns in 1833 were children under 14. There were 4,000 children in the mills of Manchester; 1,600 in Stockport; 1,500 in Bolton and 1,300 in Hyde (1981, 51).

The employment of children in textile factories continued to be high until the mid-nineteenth century. According to the British Census, in 1841 the three most common occupations of boys were Agricultural Labourer, Domestic Servant and Cotton Manufacture, with 196,640; 90,464 and 44,833 boys under 20 employed, respectively. Cotton Manufacture likewise ranked among the three most common occupations for girls: in 1841, 346,079 girls were Domestic Servants; 62,131 were employed in Cotton Manufacture and 22,174 were Dress-makers. By 1851 the three most common occupations for boys under 15 were Agricultural Labourer (82,259), Messenger (43,922) and Cotton Manufacture (33,228), and for girls they were Domestic Servant (58,933), Cotton Manufacture (37,058) and Indoor Farm Servant (12,809) (1852-53[1691-I]LXXXVIII, pt.1). It is clear from these findings that children made up a large portion of the work force in textile mills during the nineteenth century. Using returns from the Factory Inspectors, S. J. Chapman’s (1904) calculations reveal that the percentage of child operatives under 13 trended downward during the first half of the century, from 13.4% in 1835 to 4.7% in 1838, 5.8% in 1847 and 4.6% by 1850, and then rose again to 6.5% in 1856, 8.8% in 1867, 10.4% in 1869 and 9.6% in 1870 (1904, 112).

Estimates of Child Labor in Mining

Children and youth also comprised a relatively large proportion of the work forces in coal and metal mines in Britain. In 1842, children and youth made up between 19 and 40% of the work forces of coal and metal mines. Coal mines employed children mainly underground, while in metal mines more children were found on the surface “dressing the ores” (a process of separating the ore from the dirt and rock). By 1842 one-third of the underground work force of coal mines was under the age of 18, and one-fourth of the work force of metal mines was children and youth (1842[380]XV). In 1851 children and youth (under 20) comprised 30% of the total population of coal miners in Great Britain. After the Mining Act of 1842, which prohibited girls and women from working in mines, was passed, fewer children worked in mines. The Reports on Sessions 1847-48 and 1849 Mining Districts I (1847-48[993]XXVI and 1849[1109]XXII) and The Reports on Sessions 1850 and 1857-58 Mining Districts II (1850[1248]XXIII and 1857-58[2424]XXXII) contain statements from mining commissioners that the number of young children employed underground had diminished.

Jenkin (1927) estimates that in 1838 roughly 5,000 children were employed in the metal mines of Cornwall, and by 1842 the returns from The First Report show that as many as 5,378 children and youth worked in the mines. In 1838 Lemon collected data from 124 tin, copper and lead mines in Cornwall and found that 85% employed children. In the 105 mines that employed child labor, children comprised from as little as 2% to as much as 50% of the work force, with a mean of 20% (Lemon, 1838). According to Jenkin, the employment of children in the copper and tin mines of Cornwall began to decline by 1870 (1927, 309).

Explanations for Child Labor

The Supply of Child Labor

Given the role of child labor in the British Industrial Revolution, many economic historians have tried to explain why child labor became so prevalent. A competitive model of the labor market for children has been used to examine the factors that influenced the demand for children by employers and the supply of children from families. The majority of scholars argue that it was the plentiful supply of children that increased employment in industrial workplaces, turning child labor into a social problem. The most common explanation for the increase in supply is poverty – the family sent their children to work because they desperately needed the income. Another common explanation is that work was a traditional and customary component of ordinary people’s lives. Parents had worked when they were young and required their children to do the same. The prevailing view of working-class childhood was that children were “little adults” who were expected to contribute to the family’s income or enterprise. Other, less commonly argued, sources of an increase in the supply of child labor were that parents sent their children to work because they were greedy and wanted more income to spend on themselves, or that children wanted out of the house because their parents were emotionally and physically abusive. Whatever the reason for the increase in supply, scholars agree that, since mandatory schooling laws were not passed until 1876, even well-intentioned parents had few alternatives.

The Demand for Child Labor

Other compelling explanations argue that it was demand, not supply, that increased the use of child labor during the Industrial Revolution. One explanation came from the industrialists and factory owners – children were a cheap source of labor that allowed them to stay competitive. Managers and overseers saw other advantages to hiring children and pointed out that children were ideal factory workers because they were obedient, submissive, likely to respond to punishment and unlikely to form unions. In addition, since the machines had reduced many procedures to simple one-step tasks, unskilled workers could replace skilled workers. Finally, a few scholars argue that the nimble fingers, small stature and suppleness of children were especially suited to the new machinery and work situations. They argue children had a comparative advantage with machines that were small and built low to the ground, as well as in the narrow underground tunnels of coal and metal mines. The Industrial Revolution, in this case, increased the demand for child labor by creating work situations where children could be very productive.
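The logic of this supply-versus-demand debate can be made concrete with a simple worked example. The sketch below assumes linear supply and demand curves with purely illustrative parameters (none drawn from the studies cited here) and shows why the two explanations carry different predictions for children’s wages.

```python
# A minimal sketch of a competitive labor market for children, assuming
# linear demand L_d = a - b*w and supply L_s = c + d*w. All parameter
# values are illustrative, not estimates from the historical literature.

def equilibrium(a, b, c, d):
    """Return the market-clearing wage and employment level."""
    w = (a - c) / (b + d)  # set a - b*w equal to c + d*w and solve for w
    return w, a - b * w

w0, L0 = equilibrium(a=100, b=2, c=20, d=2)  # baseline market
w1, L1 = equilibrium(a=100, b=2, c=40, d=2)  # supply shift (e.g., poverty)
w2, L2 = equilibrium(a=120, b=2, c=20, d=2)  # demand shift (e.g., machinery)

print(f"baseline:     wage={w0:.1f}, employment={L0:.0f}")
print(f"supply shift: wage={w1:.1f}, employment={L1:.0f}")  # wage falls
print(f"demand shift: wage={w2:.1f}, employment={L2:.0f}")  # wage rises
```

Both shifts raise child employment, so employment counts alone cannot settle the debate; in principle it is the accompanying movement of children’s wages that distinguishes a supply-driven rise from a demand-driven one.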

Influence of Child Labor Laws

Whether the increase came from demand or supply, the argument that child labor laws were not much of a deterrent to employers or families is fairly convincing. Since fines were not large and enforcement was not strict, the implicit tax placed on the employer or family was quite low in comparison to the wages or profits the children generated [Nardinelli (1980)]. On the other hand, some scholars believe that the laws reduced the number of younger children working and reduced labor hours in general [Chapman (1904) and Plener (1873)].

Despite the laws there were still many children and youth employed in textiles and mining by mid-century. Booth calculated that there were still 58,900 boys and 82,600 girls under 15 employed in textiles and dyeing in 1881. In mining the numbers did not show a steady decline during this period, but by 1881 there were still 30,400 boys and 500 girls under 15 employed. See Table 1 below.

Table 1: Child Employment, 1851-1881

Industry & Age Cohort                    1851      1861      1871      1881
Mining
  Males under 15                       37,300    45,100    43,100    30,400
  Females under 15                      1,400       500       900       500
  Males 15-20                          50,100    65,300    74,900    87,300
  Females over 15                       5,400     4,900     5,300     5,700
  Total under 15 as % of work force       13%       12%       10%        6%
Textiles and Dyeing
  Males under 15                       93,800    80,700    78,500    58,900
  Females under 15                    147,700   115,700   119,800    82,600
  Males 15-20                          92,600    92,600    90,500    93,200
  Females over 15                     780,900   739,300   729,700   699,900
  Total under 15 as % of work force       15%       19%       14%       11%

Source: Booth (1886, 353-399).

Explanations for the Decline in Child Labor

There are many opinions regarding the reason(s) for the diminished role of child labor in these industries. Social historians believe it was the rise of the domestic ideology of the father as breadwinner and the mother as housewife, which was embedded in the upper and middle classes and spread to the working class. Economic historians argue it was the rise in the standard of living that accompanied the Industrial Revolution that allowed parents to keep their children home. Although mandatory schooling laws came too late to play a role, other scholars argue that families began taking an interest in education and sending their children to school voluntarily. Finally, others claim that it was the advances in technology and the new heavier and more complicated machinery, which required the strength of skilled adult males, that led to the decline of child labor in Great Britain. Although child labor has become a fading memory for Britons, it remains a social problem and political issue for developing countries today.

References

Alfred (Samuel Kydd). The History of the Factory Movement. London: Simpkin, Marshall, and Co., 1857.

Booth, C. “On the Occupations of the People of the United Kingdom, 1801-81.” Journal of the Royal Statistical Society (J.S.S.) XLIX (1886): 314-436.

Chapman, S. J. The Lancashire Cotton Industry. Manchester: Manchester University Publications, 1904.

Clapham, Sir John. An Economic History of Modern Britain. Vol. I and II. Cambridge: Cambridge University Press, 1926.

Collier, Francis. The Family Economy of the Working Classes in the Cotton Industry, 1784-1833. Manchester: Manchester University Press, 1964.

Cruickshank, Marjorie. Children and Industry. Manchester: Manchester University Press, 1981.

Cunningham, Hugh. “The Employment and Unemployment of Children in England, c. 1680-1851.” Past and Present 126 (1990): 115-150.

Engels, Frederick. The Condition of the Working Class in England. Translated by the Institute of Marxism-Leninism, Moscow; introduction by E. J. Hobsbawm. London, 1969 [1926].

Freudenberger, Herman, Francis J. Mather, and Clark Nardinelli. “A New Look at the Early Factory Labour Force.” Journal of Economic History 44 (1984): 1085-90.

Hammond, J. L. and Barbara Hammond. The Town Labourer, 1760-1832. New York: A Doubleday Anchor Book, 1937.

House of Commons Papers (British Parliamentary Papers):
1833(450)XX Factories, employment of children. R. Com. 1st rep.
1833(519)XXI Factories, employment of children. R. Com. 2nd rep.
1834(44)XXVII Administration and Operation of Poor Laws, App. A, pt.1.
1834(44)XXXVI Administration and Operation of Poor Laws. App. B.2, pts. III,IV,V.
1834 (167)XX Factories, employment of children. Supplementary Report.
1842[380]XV Children’s employment (mines). R. Com. 1st rep.
1847-48[993]XXVI Mines and Collieries, Mining Districts. Commissioner’s rep.
1849[1109]XXII Mines and Collieries, Mining Districts. Commissioner’s rep.
1850[1248]XXIII Mining Districts. Commissioner’s rep.
1857-58[2424]XXXII Mines and Minerals. Commissioner’s rep.

House of Lords Papers:
1819(24)CX

Jenkin, A. K. Hamilton. The Cornish Miner: An Account of His Life Above and Underground From Early Times. London: George Allen and Unwin, Ltd., 1927.

Kussmaul, Ann. Servants in Husbandry in Early Modern England. Cambridge: Cambridge University Press, 1981.

Lemon, Sir Charles. “The Statistics of the Copper Mines of Cornwall.” Journal of the Royal Statistical Society I (1838): 65-84.

Marx, Karl. Capital. Vol. I. Chicago: Charles H. Kerr & Company, 1909.

Nardinelli, Clark. Child Labor and the Industrial Revolution. Bloomington: Indiana University Press, 1990.

Nardinelli, Clark. “Were Children Exploited During the Industrial Revolution?” Research in Economic History 2 (1988): 243-276.

Nardinelli, Clark. “Child Labor and the Factory Acts.” Journal of Economic History 40, no. 4 (1980): 739-755.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1800. London: George Routledge and Sons, 1930.

Plener, Ernst Elder Von. English Factory Legislation. London: Chapman and Hall, 1873.

Redford, Arthur. Labour Migration in England, 1800-1850. Manchester: Manchester University Press, 1926.

Rule, John. The Experience of Labour in Eighteenth Century English Industry. New York: St. Martin’s Press, 1981.

Thompson, E. P. The Making of the English Working Class. New York: Vintage Books, 1966.

Tilly, L. A., and J. W. Scott. Women, Work and Family. New York: Holt, Rinehart, and Winston, 1978.

Tuttle, Carolyn. “A Revival of the Pessimist View: Child Labor and the Industrial Revolution.” Research in Economic History 18 (1998): 53-82.

Tuttle, Carolyn. Hard at Work in Factories and Mines: The Economics of Child Labor During the British Industrial Revolution. Oxford: Westview Press, 1999.

Ure, Andrew. The Philosophy of Manufactures. London, 1835.

Webb, Sidney, and Beatrice Webb. Problems of Modern Industry. London: Longmans, Green, 1898.

Citation: Tuttle, Carolyn. “Child Labor during the British Industrial Revolution”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/child-labor-during-the-british-industrial-revolution/

The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey

Livio Di Matteo, Lakehead University

Introduction1

From a macro perspective, Canadian quantitative economic history is concerned with the collection and construction of historical time series data as well as the study of the performance of broad economic aggregates over time.2 The micro dimension of quantitative economic history focuses on individual and sector responses to economic phenomena.3 In particular, micro economic history is marked by the collection and analysis of data sets rooted in individual economic and social behavior. This approach uses primary historical records like census rolls, probate records, assessment rolls, land records, parish records and company records to construct sets of socio-economic data used to examine the social and economic characteristics and behavior of those individuals and their society, both cross-sectionally and over time.

The expansion of historical micro-data studies in Canada has been a function of academic demand and supply factors. On the demand side, there has been a desire for more explicit use of economic and social theory in history, and micro-data studies that make use of available records on individuals appeal to historians interested in understanding aggregate trends and in reaching the micro-underpinnings of larger macroeconomic and social relationships. For example, in Canada, the late nineteenth century was a period of intermittent economic growth, and analyzing how that growth record affected different groups in society requires studies that disaggregate the population into sub-groups. One way of doing this that became attractive in the 1960s was to collect micro-data samples from relevant census, assessment or probate records.

On the supply side, computers have lowered research costs, making the analysis of large data sets feasible and cost-effective. The proliferation of low-cost personal computers, statistical packages and spreadsheets has led to another revolution in micro-data analysis, as computers are now routinely taken into archives so that data collection, input and analysis can proceed even more efficiently.

In addition, studies using historical micro-data are an area where economic historians trained either as economists or historians have been able to find common ground.4 Many of the pioneering micro-data projects in Canada were conducted by historians with some training in quantitative techniques, much of it acquired “on the job” out of intellectual interest and excitement rather than through graduate school training. Historians and economists are united by their common analysis of primary micro-data sources and their shared use of sophisticated computer equipment, linkage software and statistical packages.
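As a rough illustration of what such linkage software does, the sketch below matches hypothetical census and probate records on standardized names and approximate birth years. The record layouts, names and matching rule are invented for this example; production linkage systems rely on far more elaborate name standardization, blocking and match scoring.

```python
# A minimal sketch of nominal record linkage between two historical sources.
# All records and field names here are hypothetical.

CENSUS = [
    {"name": "JOHN MCDONALD", "birth_year": 1861, "district": "Wentworth"},
    {"name": "MARY OBRIEN", "birth_year": 1858, "district": "Wentworth"},
]

PROBATE = [
    {"name": "JOHN MACDONALD", "birth_year": 1862, "wealth": 4250},
]

def normalize(name):
    """Crude name standardization: collapse MC/MAC variants and spacing."""
    return name.replace("MAC", "MC").replace(" ", "")

def link(census, probate, year_tolerance=2):
    """Match records on standardized name and approximate birth year."""
    matches = []
    for c in census:
        for p in probate:
            same_name = normalize(c["name"]) == normalize(p["name"])
            close_year = abs(c["birth_year"] - p["birth_year"]) <= year_tolerance
            if same_name and close_year:
                matches.append((c, p))
    return matches

for c, p in link(CENSUS, PROBATE):
    print(c["name"], "<->", p["name"], "| probated wealth:", p["wealth"])
```

Tolerances of this kind matter because ages in historical sources were often rounded or misreported, so exact-match rules would discard many true links.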

Background to Historical Micro-data Studies in Canadian Economic History

The early stage of historical micro-data projects in Canada attempted to systematically collect and analyze data on a large scale. Many of these micro-data projects crossed the lines between social and economic history, as well as demographic history in the case of French Canada. Path-breaking work by American scholars such as Lee Soltow (1971), Stephan Thernstrom (1973) and Alice Hanson Jones (1980) was an important influence on Canadian work. Their work on wealth and social structure and mobility using census and probate data drew attention to the extent of mobility — geographic, economic and social — that existed in pre-twentieth-century America.

However, Canadian historical micro-data work has been quite distinct from that of the United States, reflecting its separate tradition in economic history. Canada’s history is one of centralized penetration from the east via the Great Lakes-St. Lawrence waterway and the presence of two founding “nations” of European settlers – English and French – which led to strong Protestant and Roman Catholic traditions. Indeed, there was nearly 100 percent membership in the Roman Catholic Church among francophone Quebeckers for much of Canada’s history. As well, an economic reliance on natural resources and a sparse population spread along an east-west corridor of isolated regions have made Canada’s economic history, politics and institutions quite different from those of the United States.

The United States, from its early natural resource staples origins, developed a large, integrated internal market that was relatively independent of external economic forces, at least compared with Canada, and this shifted research topics away from trade and towards domestic resource allocation issues. At the level of historical micro-data, American scholars have long had access to national micro-data samples, which until recently has not been the case in Canada. Most of the early studies in Canadian micro-data were regional or urban samples drawn from manuscript sources, and there has been little work since at a national level using micro-data sources. However, the strong role of the state in Canada has meant a particular richness to those sources that can be accessed, and even the Census contains some personal details not available in the U.S. Census, such as religious affiliation. Moreover, earnings data are available in the Canadian census starting some forty years earlier than in the United States.

Canadian micro-data studies have examined industry, fertility, urban and rural life, wages and labor markets, women’s work and roles in the economy, immigration and wealth. The data sources include census, probate records, assessment rolls, legal records and contracts, and are used by historians, economists, geographers, sociologists and demographers to study economic history.5 Very often, the primary sources are untapped and there can be substantial gaps in their coverage due to uneven preservation.

A Survey of Micro-data Studies

Early Years in English Canada

The fruits of early work in English Canada were books and papers by Frank Denton and Peter George (1970, 1973), Michael Katz (1975) and David Gagan (1981), among others.6 The Denton and George paper examined the influences on family size and school attendance in Wentworth County, Ontario, using the 1871 Census of Canada manuscripts. But it was Katz and Gagan’s work that generated greater attention among historians. Katz’s Hamilton Project used census, assessment rolls, city directories and other assorted micro-records to describe patterns of life in mid-nineteenth century Hamilton. Gagan’s Peel County Project was a comprehensive social and economic study of Peel County, Ontario, again using a variety of individual records including probate. These studies stimulated discussion and controversy about nineteenth-century wealth, inheritance patterns, and family size and structure.

The Demographic Tradition in French Canada

In French Canada, the pioneering work was the Saguenay Project organized by Gerard Bouchard (1977, 1983, 1992, 1993, 1996, 1998). Beginning in the 1970s, a large effort was expended to create a computerized genealogical and demographic data base for the Saguenay and Charlevoix regions of Quebec going back well into the nineteenth century. This data set, known now as the Balsac Register, contains data on 600,000 individuals (140,000 couples) and 2.4 million events (e.g., births and deaths), along with individual characteristics such as gender, and offers enormous social scientific and human genetic possibilities. The material gathered has been used to examine fertility, marriage patterns, inheritance, agricultural production and literacy, as well as genetic predisposition towards disease, and it formed the basis for a book spanning the history of population and families in the Saguenay over the period 1858 to 1971.

French Canada has a strong tradition of historical micro-data research rooted in demographic analysis.7 Another project, underway since 1969 and associated with Bertrand Desjardins, Hubert Charbonneau, Jacques Légaré and Yves Landry, is Le Programme de recherche en démographie historique (P.R.D.H.) at the Université de Montréal (Charbonneau, 1988; Landry, 1993; Desjardins, 1993). The database will eventually contain details on a million persons and their life events in Quebec between 1608 and 1850.

Industrial Studies

Only for the 1871 census have all of the schedules survived, and the industrial schedules of that census have been made machine-readable (Bloomfield, 1986; Borsa and Inwood, 1993). Kris Inwood and Phyllis Wagg (1993) have used the census manuscript industrial schedules to examine the survival of handloom weaving in rural Canada circa 1870. A total of 2,830 records were examined and data on average product, capital and months of activity utilized. The results show that the demand for woolen homespun was income sensitive and that patterns of weaving by men and women differed, with male-headed firms working a greater number of months during the year and being more likely to have a second worker.

More recently, using a combination of aggregate capital market data and firm-level data for a sample of Canadian and American steel producers, Ian Keay and Angela Redish (2004) analyze the relationships between capital costs, financial structure, and domestic capital market characteristics. They find that national capital market characteristics and firm-specific characteristics were important determinants of twentieth-century U.S. and Canadian steel firms’ financing decisions. Keay (2000) uses information from firms’ balance sheets and income accounts, and industry-specific prices, to calculate labor, capital, intermediate-input and total factor productivities for a sample of 39 Canadian and 39 American manufacturing firms in nine industries. The firm-level data also allow for the construction of nation-, industry- and time-consistent series, including capital and value added. Inwood and Keay (2005) use establishment-level data describing manufacturers located in 128 border and near-border counties in Michigan, New York, Ohio, Pennsylvania, and Ontario to calculate Canadian relative to U.S. total factor productivity ratios for 25 industries. Their results illustrate that the average U.S. establishment was approximately 7% more efficient than its Canadian counterpart in 1870/71.
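By way of illustration only, a total factor productivity comparison of this general kind can be sketched as follows, assuming a Cobb-Douglas technology with known factor shares; the figures and shares are invented, and the calculation is a stylized stand-in for, not a reproduction of, the procedures used in these studies.

```python
# A minimal sketch: firm-level TFP under an assumed Cobb-Douglas technology,
# TFP = Y / (L**s_l * K**s_k * M**s_m), with factor shares summing to one.
# All numbers are illustrative; inputs and output are assumed to be deflated
# to a common price base, as firm-level comparisons require.

def tfp(output, labor, capital, materials, s_l, s_k, s_m):
    assert abs(s_l + s_k + s_m - 1.0) < 1e-9, "shares must sum to one"
    return output / (labor**s_l * capital**s_k * materials**s_m)

tfp_ca = tfp(output=1000, labor=50, capital=400, materials=300,
             s_l=0.5, s_k=0.3, s_m=0.2)  # hypothetical Canadian establishment
tfp_us = tfp(output=1070, labor=50, capital=400, materials=300,
             s_l=0.5, s_k=0.3, s_m=0.2)  # hypothetical U.S. establishment

print(f"U.S./Canada relative TFP: {tfp_us / tfp_ca:.2f}")  # prints 1.07
```

With identical inputs, the hypothetical U.S. establishment producing 7% more output shows a relative TFP of 1.07, which is the sense in which an efficiency gap of that size is measured.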

Population, Demographics & Fertility

Marvin McInnis (1977) assembled a body of census data on childbearing and other aspects of Upper Canadian households in 1861 and produced a sample of 1,200 farm households that was used to examine the relationship between childbearing and land availability. He found that an abundance of nearby uncultivated land did affect the probability of there being young children in the household, but the magnitude of the influence was small. Moreover, the strongest result was that fertility fell as larger cities developed close enough for urban life and culture to exert a real influence.

Eric Moore and Brian Osborne (1987) have examined the socio-economic differentials of marital fertility in Kingston. They related religion, birthplace, age of mother, ethnic origin and occupational status to changes in fertility between 1861 and 1881, using a data set of approximately 3,000 observations taken from the manuscript census. Their choice of variables allows for an examination of the impact of both economic factors and cultural attributes. William Marr (1992) took the first reasonably large sample of farm households (2,656) from the 1851-52 Census of Canada West and examined the determinants of fertility. He found that fertility differences between older and more newly settled regions were influenced by land availability at the farm level, but that farm location, with respect to the extent of agricultural development, did not affect fertility when age, birthplace and religion were held constant. Michael Wayne (1998) uses the 1861 Census of Canada to look at the black population of Canada on the eve of the American Civil War. George Emery (1993) provides an assessment of the comprehensiveness and accuracy of aggregate vital statistics in Ontario between 1869 and 1952 by examining the process by which they were recorded. Emery and Kevin McQuillan (1988) use case studies to examine mortality in nineteenth-century Ingersoll, Ontario.

Urban and Rural Life

A number of studies have examined urban and rural life. Bettina Bradbury (1984) has analyzed the census manuscripts of two working-class Montreal wards, Ste. Anne and St. Jacques, for the years 1861, 1871 and 1881. Random samples of one-tenth of the households in these parts of Montreal were taken, yielding nearly 11,000 individuals over three decades. The data were used to examine women and wage labor in Montreal. The evidence is that men were the primary wage earners, while the wife’s contribution to the family economy lay not so much in her own wage labor, which was infrequent, as in organizing the economic life of the household and finding alternate sources of support.

Bettina Bradbury, Peter Gossage, Evelyn Kolish and Alan Stewart (1993) and Gossage (1991) have examined marriage contracts in Montreal over the period 1820-1840 and found that, over time, the use of marriage contracts changed, becoming a tool of a propertied minority. As well, a growing proportion of contract signers chose to keep the property of spouses separate rather than “in community.” The movement towards separation was most likely to be found among the wealthy where separate property offered advantages, especially to those engaged in commerce during harsh economic times. Gillian Hamilton (1999) looks at prenuptial contracting behavior in early nineteenth-century Quebec to explore property rights within families and finds that couples signing contracts tended to choose joint ownership of property when wives were particularly important to the household.

Chad Gaffield (1979, 1983, 1987) has examined social, family and economic life in the Eastern Ontario counties of Prescott-Russell, Alfred and Caledonia using aggregate census as well as manuscript data for the period 1851-1881.8 He has applied the material to studying rural schooling and the economic structure of farm families and found systematic differences between the marriage patterns of Anglophones and Francophones, with Francophones tending to marry at a younger average age. Also, land shortages and the diminishing forest frontier created economic difficulties that led to reduced family sizes by 1881. Gaffield’s most significant current research project is his leadership of the Canadian Century Research Infrastructure (CCRI) initiative, one of the country’s largest research projects. The CCRI is creating cross-indexed databases from a century’s worth of national census information, enabling unprecedented understanding of the making of modern Canada. This effort will eventually lead to an integrated set of micro-data resources at a national level comparable to what currently exists for the United States.9

Business Records

Company and business records have also been used as a source of micro-data and insight into economic history. Gillian Hamilton has conducted a number of studies examining contracts, property rights and labor markets in pre-twentieth-century Canada. Hamilton (1996, 2000) examines the nature of apprenticing arrangements in Montreal around the turn of the nineteenth century, using apprenticeship contracts from a larger body of notarial records found in Quebec. The principal questions addressed are what determined apprenticeship length and when the decline of the institution began. Hamilton finds that the characteristics of both masters and their boys were important and that masters often relied on probationary periods to better gauge a boy’s worth before signing a contract. Probations, all else equal, were associated with shorter contracts.

Ann Carlos and Frank Lewis (1998, 1999, 2001, 2002) use Hudson’s Bay Company fur trading records to study property rights, competition, and depletion in the eighteenth-century Canadian fur trade, and their work represents an important foray into Canadian aboriginal economic history by studying the role of aboriginals as consumers. Doug McCalla (2001, 2005, 2005) uses store records from Upper Canada to examine consumer purchases in the early nineteenth century and gain insight into material culture. Barton Hamilton and Mary MacKinnon (1996) use the Canadian Pacific Railway records to study changes between 1903 and 1938 in the composition of job separations and the probability of separation. The proportion of voluntary departures fell by more than half after World War I. They estimate independent competing-risk, piecewise-constant hazard functions for the probabilities of quits and layoffs. Changes in workforce composition lengthened the average worker’s spell, but a worker with any given set of characteristics was much more likely to be laid off after 1921, although many of these layoffs were only temporary.
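The hazard framework used in the quits-and-layoffs study is straightforward to illustrate: within each tenure interval, the cause-specific hazard is the number of exits for that cause divided by the total time at risk in the interval, with exits for other causes treated as censoring. The sketch below uses made-up spells and intervals, not the CPR data.

```python
# A minimal sketch of piecewise-constant, competing-risk hazard estimation.
# Spells are (tenure in months at exit, cause of exit); cause None = censored.
# All data are invented for illustration.

INTERVALS = [(0, 12), (12, 60), (60, 240)]  # tenure intervals in months

SPELLS = [(3, "quit"), (8, "layoff"), (30, "quit"), (45, None),
          (70, "layoff"), (100, None), (150, "layoff")]

def cause_specific_hazard(spells, intervals, cause):
    """Events of `cause` per person-month at risk, interval by interval."""
    rates = []
    for lo, hi in intervals:
        events, exposure = 0, 0.0
        for t, c in spells:
            if t <= lo:
                continue  # spell ended before this interval began
            exposure += min(t, hi) - lo  # person-months at risk here
            if t <= hi and c == cause:
                events += 1
        rates.append(events / exposure if exposure else float("nan"))
    return rates

print("quit hazard:  ", cause_specific_hazard(SPELLS, INTERVALS, "quit"))
print("layoff hazard:", cause_specific_hazard(SPELLS, INTERVALS, "layoff"))
```

Comparing layoff hazards before and after 1921, interval by interval and conditional on worker characteristics, is the kind of calculation that supports the conclusion that a given worker was more likely to be laid off after 1921.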

MacKinnon (1997) taps into the CPR data again with a constructed sample of 9,000 employees hired before 1945, including 700 pensioners, and finds that features of the CPR pension plan are consistent with economic explanations of the role of pensions. Long, continuous periods of service were likely to be rewarded, and employees in the most responsible positions generally had higher pensions.

MacKinnon (1996) complements published Canadian nominal wage data by constructing a new hourly wage series, developed from firm records, for machinists, helpers, and laborers employed by the Canadian Pacific Railway between 1900 and 1930. This new evidence suggests that real wage growth in Canada was faster than previously believed, and that there were substantial changes in wage inequality. In another contribution, MacKinnon (1990) studies unemployment relief in Canada by examining relief policies and recipients and contrasting the Canadian situation with unemployment insurance in Britain. She finds demographic factors important in explaining who went on relief, with older workers and those with large families most likely to be on relief for sustained periods. Another unique contribution to historical labor studies is Michael Huberman and Denise Young (1999), who examine data on 1,554 individual strikes in Canada from 1901 to 1914 and conclude that international unions did not weaken Canada’s union movement and that they became part of Canada’s industrial relations framework.

The 1891 and 1901 Census

An ongoing project is the 1891 Census of Canada Project under Director Kris Inwood, which is making a digitized sample of individual records from the 1891 census available to the research public. The project is hosted by the University of Guelph, with support from the Canadian Foundation for Innovation, the Ontario Innovation Trust and private sector partners. Phase 1 (Ontario) of the project began during the winter of 2003 in association with the College of Arts Canada Research Chair in Rural History; the Ontario project continues until 2007. Phase II began in 2005; it extends data collection to the rest of the country and also creates an integrated national sample. The database includes information returned on a randomly selected 5% of the enumerators’ manuscript pages, with each page containing information describing twenty-five people. An additional 5% of census pages for western Canada and several large cities augment the basic sample. Ultimately the database will contain records for more than 350,000 people, bearing in mind that the population of Canada in 1891 was 3.8 million.
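The sampling design is easy to mimic: treat each enumerator’s page as a cluster of twenty-five individuals and draw pages at random. In the sketch below, the total page count is hypothetical, chosen only so that the arithmetic roughly matches a population of 3.8 million; the project’s actual page inventory will differ.

```python
# A minimal sketch of the 5% page-cluster sampling design; the page count
# is a hypothetical figure, not the project's actual inventory.
import random

random.seed(1891)

N_PAGES = 152_000          # illustrative: 152,000 pages x 25 people ~ 3.8M
PEOPLE_PER_PAGE = 25

sample = random.sample(range(N_PAGES), k=N_PAGES // 20)  # a 5% page sample
print(f"{len(sample)} pages -> about {len(sample) * PEOPLE_PER_PAGE:,} people")
```

Sampling whole pages rather than individuals keeps households and neighbors together, which is valuable for family-level analysis, at the cost of a clustered rather than simple random sample.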

The release of the 1901 Census of Canada manuscript census has also spawned numerous micro-data studies. Peter Baskerville and Eric Sager (1995, 1998) have used the 1901 Census to examine unemployment and the work force in late Victorian Canada.10 Baskerville (2001a, 2001b) uses the 1901 census to examine the practice of boarding in Victorian Canada and, in another study, wealth and religion. Kenneth Sylvester (2001) uses the 1901 census to examine ethnicity and landholding. Alan Green and Mary MacKinnon (2001) use a new sample of individual-level data compiled from the manuscript returns of the 1901 Census of Canada to examine the assimilation of male wage-earning immigrants (mainly from the UK) in Montreal and Toronto. Unlike studies of post-World War II immigrants to Canada, and some recent studies of nineteenth-century immigration to the United States, they find slow assimilation to the earnings levels of native-born English mother-tongue Canadians. Green, MacKinnon and Chris Minns (2005) use 1901 census data to demonstrate that Anglophones and Francophones had very different personal characteristics, so that movement to the west was rarely economically attractive for Francophones. However, large-scale migration into New England fitted French Canadians’ demographic and human capital profile.

Wealth and Inequality

Recent years have also seen the emergence of a body of literature by several contributors on wealth accumulation and distribution in nineteenth-century Canada. This work has provided quantitative measurements of the degree of inequality in wealth holding, as well as its evolution over time. Gilles Paquet and Jean-Pierre Wallot (1976, 1986) have examined the net personal wealth of wealth holders using “les inventaires après décès” (inventories taken after death) in Quebec during the late eighteenth and early nineteenth centuries. They have suggested that the habitant was indeed a rational economic agent who chose land as a form of wealth not because of inherent conservatism but because information and transaction costs hindered the accumulation of financial assets.

A. Gordon Darroch (1983a, 1983b) has utilized municipal assessment rolls to study wealth inequality in Toronto during the late nineteenth century. Darroch found that inequality among assessed families was such that the top one-fifth of assessed families held at least 65% of all assessed wealth and the poorest 40% never more than 8%, even though inequality did decline between 1871 and 1899. Darroch and Michael Ornstein (1980, 1984) used the 1871 Census to examine ethnicity, occupational structure and family life cycles in Canada. Darroch and Soltow (1992, 1994) research property holding in Ontario using 5,669 individuals from the 1871 census manuscripts and find “deep and abiding structures of inequality” accompanied by opportunities for mobility.
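Quantile shares of the kind Darroch reports, along with summary measures such as the Gini coefficient used elsewhere in this literature, are simple to compute once a wealth distribution is in hand. The sketch below uses made-up assessment values, not Darroch’s data.

```python
# A minimal sketch of two standard inequality measures, computed on
# invented assessed-wealth figures.

def top_share(wealth, fraction):
    """Share of total wealth held by the richest `fraction` of holders."""
    w = sorted(wealth, reverse=True)
    k = max(1, int(len(w) * fraction))
    return sum(w[:k]) / sum(w)

def gini(wealth):
    """Gini coefficient via the standard sorted-rank formula."""
    w = sorted(wealth)
    n = len(w)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(w))
    return 2 * weighted_sum / (n * sum(w)) - (n + 1) / n

assessed = [100, 200, 250, 400, 450, 600, 900, 1500, 2600, 8000]
print(f"top 20% share: {top_share(assessed, 0.20):.0%}")
print(f"Gini:          {gini(assessed):.2f}")
```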

Lars Osberg and Fazley Siddiq (1988, 1993) and Siddiq (1988) have examined wealth inequality in Nova Scotia using probated estates from 1871 and 1899. They found a slight shift towards greater inequality in wealth over time and concluded that the prosperity of the 1850-1875 period in Nova Scotia benefited primarily the Halifax-based merchant class. Higher levels of wealth were associated with being a merchant and with living in Halifax, as opposed to the rest of the province. Siddiq and Julian Gwyn (1992) used probate inventories from 1851 and 1871 to study wealth over the period. They again document a trend towards greater inequality, accompanied by rising wealth. In addition, Peter Ward has collected a set of 196 Nova Scotia probate records for Lunenburg County spanning 1808-1922, as well as a set of poll tax records for the same location between 1791 and 1795.11

Livio Di Matteo and Peter George (1992, 1998) have examined wealth distribution in late nineteenth-century Ontario using probate records and assessment roll data for Wentworth County for the years 1872, 1882, 1892 and 1902. They find a rise in average wealth levels up until 1892 and a decline from 1892 to 1902. Whereas the rise in wealth from 1872 to 1892 appears to have been accompanied by a trend towards greater equality in wealth distribution, the period 1892 to 1902 marked a return to greater inequality. Di Matteo (1996, 1997, 1998, 2001) uses a set of 3,515 probated decedents for all of Ontario in 1892 to examine the determinants of wealth holding, the wealth of the Irish, inequality and life-cycle accumulation. Di Matteo and Herb Emery (2002) use the 1892 Ontario data to examine life insurance holding and the extent of self-insurance as wealth rises. Di Matteo (2004, 2006) uses a newly constructed micro-data set for the Thunder Bay District from 1885-1920, consisting of 1,293 probated decedents, to examine wealth and inequality during Canada’s wheat boom era. Di Matteo is currently using Ontario probated decedents from 1902, linked to the 1901 census and combined with previous data from 1892, to examine the impact of religious affiliation on wealth holding.

Wealth and property holding among women has also been a specific topic of research.12 Peter Baskerville (1999) uses probate data to examine wealth holding by women in the cities of Victoria and Hamilton between 1880 and 1901 and finds that they were substantial property owners. The holding of wealth by women in the wake of property legislation is studied by Inwood and Sue Ingram (2000) and Inwood and Sarah Van Sligtenhorst (2004), whose work chronicles the increase in female property holding following Canadian property law changes in the late nineteenth century. Inwood and Richard Reid (2001) also use the Canadian Census to examine the relationship between gender and occupational identity.

Conclusion

The flurry of recent activity in Canadian quantitative economic history using census and probate data bodes well for the future. Even the National Archives of Canada has now made digital images of census forms, as well as other primary records, available online.13 Moreover, projects such as the CCRI and the 1891 Census Project hold the promise of new, integrated data sources for future research on national as opposed to regional micro-data questions. We will be able to see the extent of regional economic development, earnings and convergence at a regional level and from a national perspective. Access to the 1911 and, in future, the 1921 Census of Canada will also provide fertile areas for research and discovery. The period between 1900 and 1921, spanning the wheat boom and the First World War, is particularly important as it coincides with Canadian industrialization, rapid economic growth and the further expansion of wealth and income at the individual level. Moreover, access to new samples of micro-data may also help shed light on aboriginal economic history during the nineteenth and early twentieth centuries, as well as the economic progress of women.14 In particular, the economic history of Canada’s aboriginal peoples after the decline of the fur trade and during Canada’s industrialization is an area where micro-data might be useful in illustrating economic trends and conditions.15

References:

Baskerville, Peter A. “Familiar Strangers: Urban Families with Boarders in Canada, 1901.” Social Science History 25, no. 3 (2001): 321-46.

Baskerville, Peter. “Did Religion Matter? Religion and Wealth in Urban Canada at the Turn of the Twentieth Century: An Exploratory Study.” Histoire sociale-Social History XXXIV, no. 67 (2001): 61-96.

Baskerville, Peter A. and Eric W. Sager. “Finding the Work Force in the 1901 Census of Canada.” Histoire sociale-Social History XXVIII, no. 56 (1995): 521-40.

Baskerville, Peter A., and Eric W. Sager. Unwilling Idlers: The Urban Unemployed and Their Families in Late Victorian Canada. Toronto: University of Toronto Press, 1998.

Baskerville, Peter A. “Women and Investment in Late-Nineteenth Century Urban Canada: Victoria and Hamilton, 1880-1901.” Canadian Historical Review 80, no. 2 (1999): 191-218.

Borsa, Joan, and Kris Inwood. Codebook and Interpretation Manual for the 1870-71 Canadian Industrial Database. Guelph, 1993.

Bouchard, Gerard. “Introduction à l’étude de la société saguenayenne aux XIXe et XXe siècles.” Revue d’histoire de l’Amérique française 31, no. 1 (1977): 3-27.

Bouchard, Gerard. “Les systèmes de transmission des avoirs familiaux et le cycle de la société rurale au Québec, du XVIIe au XXe siècle.” Histoire sociale-Social History XVI, no. 31 (1983): 35-60.

Bouchard, Gerard. “Les fichiers-réseaux de population: Un retour à l’individualité.” Histoire sociale-Social History XXI, no. 42 (1988): 287-94.

Bouchard, Gerard, and Regis Thibeault. “Change and Continuity in Saguenay Agriculture: The Evolution of Production and Yields (1852-1971).” In Canadian Papers in Rural History, Vol. VIII, edited by Donald H. Akenson, 231-59. Gananoque, ON: Langdale Press, 1992.

Bouchard, Gerard. “Computerized Family Reconstitution and the Measure of Literacy. Presentation of a New Index.” History and Computing 5, no 1 (1993): 12-24.

Bouchard, Gerard. Quelques arpents d’Amérique: Population, économie, famille au Saguenay, 1838-1971. Montreal: Boréal, 1996.

Bouchard, Gerard. “Economic Inequalities in Saguenay Society, 1879-1949: A Descriptive Analysis.” Canadian Historical Review 79, no. 4 (1998): 660-90.

Bourbeau, Robert, and Jacques Légaré. Évolution de la mortalité au Canada et au Québec 1831-1931. Montreal: Les Presses de l’Université de Montréal, 1982.

Bradbury, Bettina. “Women and Wage Labour in a Period of Transition: Montreal, 1861-1881.” Histoire sociale-Social History XVII (1984): 115-31.

Bradbury, Bettina, Peter Gossage, Evelyn Kolish, and Alan Stewart. “Property and Marriage: The Law and the Practice in Early Nineteenth-Century Montreal.” Histoire sociale-Social History XXVI, no. 51 (1993): 9-40.

Carlos, Ann, and Frank Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company, 1700-1763.” In The Other Side of the Frontier: Economic Explanations into Native American History, edited by Linda Barrington. Boulder, CO: Westview Press, 1998.

Carlos, Ann, and Frank Lewis. “Property Rights, Competition, and Depletion in the Eighteenth-century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann, and Frank Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 3 (2002): 285-317.

Carlos, Ann and Frank Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 1037-64.

Charbonneau, Hubert. “Le registre de population du Québec ancien: Bilan de vingt années de recherches.” Histoire sociale-Social History XXI, no. 42 (1988): 295-99.

Darroch, A. Gordon. “Occupational Structure, Assessed Wealth and Homeowning during Toronto’s Early Industrialization, 1861-1899.” Histoire sociale-Social History XVI (1983): 381-419.

Darroch, A. Gordon. “Early Industrialization and Inequality in Toronto, 1861-1899.” Labour/Le Travailleur 11 (1983): 31-61.

Darroch, A. Gordon. “A Study of Census Manuscript Data for Central Ontario, 1861-1871: Reflections on a Project and On Historical Archives.” Histoire sociale-Social History XXI, no. 42 (1988): 304-11.

Darroch, A. Gordon, and Michael Ornstein. “Ethnicity and Occupational Structure in Canada in 1871: The Vertical Mosaic in Historical Perspective.” Canadian Historical Review 61 (1980): 305-33.

Darroch, A. Gordon, and Michael Ornstein. “Family Coresidence in Canada in 1871: Family Life Cycles, Occupations and Networks of Mutual Aid.” Canadian Historical Association Historical Papers (1984): 30-55.

Darroch, A. Gordon, and Lee Soltow. “Inequality in Landed Wealth in Nineteenth-Century Ontario: Structure and Access.” Canadian Review of Sociology and Anthropology 29 (1992): 167-200.

Darroch, A. Gordon, and Lee Soltow. Property and Inequality in Victorian Ontario: Structural Patterns and Cultural Communities in the 1871 Census. Toronto: University of Toronto Press, 1994.

Denton, Frank T., and Peter George. “An Explanatory Statistical Analysis of Some Socio-economic Characteristics of Families in Hamilton, Ontario, 1871.” Histoire sociale-Social History 5 (1970): 16-44.

Denton, Frank T., and Peter George. “The Influence of Socio-Economic Variables on Family Size in Wentworth County, Ontario, 1871: A Statistical Analysis of Historical Micro-data.” Review of Canadian Sociology and Anthropology 10 (1973): 334-45.

Di Matteo, Livio. “Wealth and Inequality on Ontario’s Northwestern Frontier: Evidence from Probate.” Histoire sociale-Social History XXXVIII, no. 75 (2006): 79-104.

Di Matteo, Livio. “Boom and Bust, 1885-1920: Regional Wealth Evidence from Probate Records.” Australian Economic History Review 44, no. 1 (2004): 52-78.

Di Matteo, Livio. “Patterns of Inequality in Late Nineteenth-Century Ontario: Evidence from Census-Linked Probate Data.” Social Science History 25, no. 3 (2001): 347-80.

Di Matteo, Livio. “Wealth Accumulation and the Life Cycle in Economic History: Implications of Alternative Approaches to Micro-Data.” Explorations in Economic History 35 (1998): 296-324.

Di Matteo, Livio. “The Determinants of Wealth and Asset Holding in Nineteenth Century Canada: Evidence from Micro-data.” Journal of Economic History 57, no. 4 (1997): 907-34.

Di Matteo, Livio. “The Wealth of the Irish in Nineteenth-Century Ontario.” Social Science History 20, no. 2 (1996): 209-34.

Di Matteo, Livio, and J.C. Herbert Emery. “Wealth and the Demand for Life Insurance: Evidence from Ontario, 1892.” Explorations in Economic History 39, no. 4 (2002): 446-69.

Di Matteo, Livio, and Peter George. “Patterns and Determinants of Wealth among Probated Decedents in Wentworth County, Ontario, 1872-1902.” Histoire sociale-Social History XXXI, no. 61 (1998): 1-34.

Di Matteo, Livio, and Peter George. “Canadian Wealth Inequality in the Late Nineteenth Century: A Study of Wentworth County, Ontario, 1872-1902.” Canadian Historical Review LXXIII, no. 4 (1992): 453-83.

Emery, George N. Facts of Life: The Social Construction of Vital Statistics, Ontario, 1869-1952. Montreal: McGill-Queen’s University Press, 1993.

Emery, George, and Kevin McQuillan. “A Case Study Approach to Ontario Mortality History: The Example of Ingersoll, 1881-1971.” Canadian Studies in Population 15 (1988): 135-58.

Ens, Gerhard. Homeland to Hinterland: The Changing Worlds of the Red River Metis in the Nineteenth Century. Toronto: University of Toronto Press, 1996.

Gaffield, Chad. “Canadian Families in Cultural Context: Hypotheses from the Mid-Nineteenth Century.” Historical Papers, Canadian Historical Association (1979): 48-70.

Gaffield, Chad. “Schooling, the Economy and Rural Society in Nineteenth-Century Ontario.” In Childhood and Family in Canadian History, edited by Joy Parr, 69-92. Toronto: McClelland and Stewart, 1983.

Gaffield, Chad. Language, Schooling and Cultural Conflict: The Origins of the French-Language Controversy in Ontario. Kingston and Montreal: McGill-Queen’s University Press, 1987.

Gaffield, Chad. “Machines and Minds: Historians and the Emerging Collaboration.” Histoire sociale-Social History XXI, no. 42 (1988): 312-17.

Gagan, David. Hopeful Travellers: Families, Land and Social Change in Mid-Victorian Peel County, Canada West. Toronto: University of Toronto Press, 1981.

Gagan, David. “Some Comments on the Canadian Experience with Historical Databases.” Histoire sociale-Social History XXI, no. 42 (1988): 300-03.

Gossage, Peter. “Family Formation and Age at Marriage at Saint-Hyacinthe, Quebec, 1854-1891.” Histoire sociale-Social History XXIV, no. 47 (1991): 61-84.

Green, Alan, Mary MacKinnon, and Chris Minns. “Conspicuous by Their Absence: French Canadians and the Settlement of the Canadian West.” Journal of Economic History 65, no. 3 (2005): 822-49.

Green, Alan, and Mary MacKinnon. “The Slow Assimilation of British Immigrants in Canada: Evidence from Montreal and Toronto, 1901.” Explorations in Economic History 38, no. 3 (2001): 315-38.

Green, Alan G., and Malcolm C. Urquhart. “New Estimates of Output Growth in Canada: Measurement and Interpretation.” In Perspectives on Canadian Economic History, edited by Douglas McCalla, 182-199. Toronto: Copp Clark Pitman Ltd., 1987.

Gwyn, Julian, and Fazley K. Siddiq. “Wealth Distribution in Nova Scotia during the Confederation Era, 1851 and 1871.” Canadian Historical Review LXXIII, no. 4 (1992): 435-52.

Hamilton, Barton, and Mary MacKinnon. “Quits and Layoffs in Early Twentieth Century Labour Markets.” Explorations in Economic History 33 (1996): 346-66.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3 (2000): 627-64.

Hamilton, Gillian. “Property Rights and Transaction Costs in Marriage: Evidence from Prenuptial Contracts.” Journal of Economic History 59, no. 1 (1999): 68-103.

Hamilton, Gillian. “The Market for Montreal Apprentices: Contract Length and Information.” Explorations in Economic History 33, no. 4 (1996): 496-523.

Hamilton, Michelle, and Kris Inwood. “The Identification of the Aboriginal Population in the 1891 Census of Canada.” Manuscript, University of Guelph, 2006.

Henripin, Jacques. Tendances et facteurs de la fécondité au Canada. Ottawa: Bureau fédéral de la statistique, 1968.

Huberman, Michael, and Denise Young. “Cross-Border Unions: Internationals in Canada, 1901-1914.” Explorations in Economic History 36 (1999): 204-31.

Igartua, José E. “Les bases de données historiques: L’expérience canadienne depuis quinze ans – Introduction.” Histoire sociale-Social History XXI, no. 42 (1988): 283-86.

Inwood, Kris, and Phyllis Wagg. “The Survival of Handloom Weaving in Rural Canada circa 1870.” Journal of Economic History 53 (1993): 346-58.

Inwood, Kris, and Sue Ingram. “The Impact of Married Women’s Property Legislation in Victorian Ontario.” Dalhousie Law Journal 23, no. 2 (2000): 405-49.

Inwood, Kris, and Sarah Van Sligtenhorst. “The Social Consequences of Legal Reform: Women and Property in a Canadian Community.” Continuity and Change 19, no. 1 (2004): 165-97.

Inwood, Kris, and Richard Reid. “Gender and Occupational Identity in a Canadian Census.” Historical Methods 32, no. 2 (2001): 57-70.

Inwood, Kris, and Kevin James. “The 1891 Census of Canada.” Cahiers québécois de démographie, forthcoming.

Inwood, Kris, and Ian Keay. “Bigger Establishments in Thicker Markets: Can We Explain Early Productivity Differentials between Canada and the United States?” Canadian Journal of Economics 38, no. 4 (2005): 1327-63.

Jones, Alice Hanson. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Katz, Michael B. The People of Hamilton, Canada West: Family and Class in a Mid-Nineteenth-Century City. Cambridge: Harvard University Press, 1975.

Keay, Ian. “Canadian Manufacturers’ Relative Productivity Performance: 1907-1990.” Canadian Journal of Economics 33, no. 4 (2000): 1049-68.

Keay, Ian, and Angela Redish. “The Micro-economic Effects of Financial Market Structure: Evidence from Twentieth-Century North American Steel Firms.” Explorations in Economic History 41, no. 4 (2004): 377-403.

Landry, Yves. “Fertility in France and New France: The Distinguishing Characteristics of Canadian Behaviour in the Seventeenth and Eighteenth Centuries.” Social Science History 17, no. 4 (1993): 577-92.

MacKinnon, Mary. “Relief Not Insurance: Canadian Unemployment Relief in the 1930s.” Explorations in Economic History 27, no. 1 (1990): 46-83.

MacKinnon, Mary. “New Evidence on Canadian Wage Rates, 1900-1930.” Canadian Journal of Economics XXIX, no. 1 (1996): 114-31.

MacKinnon, Mary. “Providing for Faithful Servants: Pensions at the Canadian Pacific Railway.” Social Science History 21, no. 1 (1997): 59-83.

Marr, William. “Micro and Macro Land Availability as a Determinant of Human Fertility in Rural Canada West, 1851.” Social Science History 16 (1992): 583-90.

McCalla, Doug. “Upper Canadians and Their Guns: An Exploration via Country Store Accounts (1808-61).” Ontario History 97 (2005): 121-37.

McCalla, Doug. “A World without Chocolate: Grocery Purchases at Some Upper Canadian Country Stores, 1808-61.” Agricultural History 79 (2005): 147-72.

McCalla, Doug. “Textile Purchases by Some Ordinary Upper Canadians, 1808-1862.” Material History Review 53 (2001): 4-27.

McInnis, Marvin. “Childbearing and Land Availability: Some Evidence from Individual Household Data.” In Population Patterns in the Past, edited by Ronald Demos Lee, 201-27. New York: Academic Press, 1977.

Moore, Eric G., and Brian S. Osborne. “Marital Fertility in Kingston, 1861-1881: A Study of Socio-economic Differentials.” Histoire sociale-Social History XX (1987): 9-27.

Muise, Del. “The Industrial Context of Inequality: Female Participation in Nova Scotia’s Paid Workforce, 1871-1921.” Acadiensis XX, no. 2 (1991).

Myers, Sharon. “‘Not to Be Ranked as Women’: Female Industrial Workers in Halifax at the Turn of the Twentieth Century.” In Separate Spheres: Women’s Worlds in the Nineteenth-Century Maritimes, edited by Janet Guildford and Suzanne Morton, 161-83. Fredericton: Acadiensis Press, 1994.

Osberg, Lars, and Fazley Siddiq. “The Acquisition of Wealth in Nova Scotia in the Late Nineteenth Century.” Research in Economic Inequality 4 (1993): 181-202.

Osberg, Lars, and Fazley Siddiq. “The Inequality of Wealth in Britain’s North American Colonies: The Importance of the Relatively Poor.” Review of Income and Wealth 34 (1988): 143-63.

Paquet, Gilles, and Jean-Pierre Wallot. “Les Inventaires après décès à Montréal au tournant du XIXe siècle: préliminaires à une analyse.” Revue d’histoire de l’Amérique française 30 (1976): 163-221.

Paquet, Gilles, and Jean-Pierre Wallot. “Stratégie Foncière de l’Habitant: Québec (1790-1835).” Revue d’histoire de l’Amérique française 39 (1986): 551-81.

Seager, Allen, and Adele Perry. “Mining the Connections: Class, Ethnicity and Gender in Nanaimo, British Columbia, 1891.” Histoire sociale-Social History XXX, no. 59 (1997): 55-76.

Siddiq, Fazley K. “The Size Distribution of Probate Wealth Holdings in Nova Scotia in the Late Nineteenth Century.” Acadiensis 18 (1988): 136-47.

Soltow, Lee. Patterns of Wealthholding in Wisconsin since 1850. Madison: University of Wisconsin Press, 1971.

Sylvester, Kenneth Michael. “All Things Being Equal: Land Ownership and Ethnicity in Rural Canada, 1901.” Histoire sociale-Social History XXXIV, no. 67 (2001): 35-59.

Thernstrom, Stephan. The Other Bostonians: Poverty and Progress in the American Metropolis, 1880-1970. Cambridge: Harvard University Press, 1973.

Urquhart, Malcolm C. Gross National Product, Canada, 1870-1926: The Derivation of the Estimates. Montreal: McGill-Queen’s University Press, 1993.

Urquhart, Malcolm C. “New Estimates of Gross National Product, Canada, 1870-1926: Some Implications for Canadian Development.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 9-94. Chicago: University of Chicago Press, 1986.

Wayne, Michael. “The Black Population of Canada West on the Eve of the American Civil War: A Reassessment Based on the Manuscript Census of 1861.” In A Nation of Immigrants: Women, Workers and Communities in Canadian History, edited by Franca Iacovetta, Paula Draper and Robert Ventresca. Toronto: University of Toronto Press, 1998.

Footnotes

1 The helpful comments of Herb Emery, Mary MacKinnon and Kris Inwood on earlier drafts are acknowledged.

2 See especially Mac Urquhart’s spearheading of the major efforts in national income and output estimation (Urquhart, 1986, 1993).

3 “Individual response” means by individuals, households and firms.

4 See Gaffield (1988) and Igartua (1988).

5 The Conference on the Use of Census Manuscripts for Historical Research held at Guelph in March 1993 was an example of the interdisciplinary nature of historical micro-data research. The conference was sponsored by the Canadian Committee on History and Computing, the Social Sciences and Humanities Research Council of Canada and the University of Guelph. The conference was organized by economist Kris Inwood and historian Richard Reid and featured presentations by historians, economists, demographers, sociologists and anthropologists.

6 The Denton/George project had its origins in a proposal to the Second Conference on Quantitative Research in Canadian Economic History in 1967 that a sampling of the Canadian census be undertaken. Denton and George drew a sample from the manuscript census returns for individuals for 1871 that had recently been made available, and reported their preliminary findings to the Fourth Conference in March, 1970 in a paper that was published shortly afterwards in Histoire sociale/Social History (1970). Mac Urquhart’s role here must be acknowledged. He and Ken Buckley were insistent that a sampling of Census manuscripts would be an important venture for the conference members to initiate.

7 Also, sources such as the aggregate census have been used to examine fertility by Henripin (1968) and mortality by Bourbeau and Legaré (1982).

8 Chad Gaffield, Peter Baskerville and Alan Artibise were also involved in the creation of a machine-readable listing of archival sources on Vancouver Island known as the Vancouver Island Project (Gaffield, 1988, 313).

9 See Chad Gaffield, “Ethics, Technology and Confidential Research Data: The Case of the Canadian Century Research Infrastructure Project,” paper presented to the World History Conference, Sydney, July 3-9, 2005.

10 Baskerville and Sager have been involved in the Canadian Families Project. See “The Canadian Families Project”, a special issue of the journal Historical Methods, 33 no. 4 (2000).

11 See Don Paterson’s Economic and Social History Data Base at the University of British Columbia at http://www2.arts.ubc.ca/econsochistory/data/data_list.html

12 Examples of other aspects of gender and economic status in a regional context are covered by Muise (1991), Myers (1994) and Seager and Perry (1997).

13 See http://www.collectionscanada.ca/genealogy/022-500-e.html

14 See for example the work by Gerhard Ens (1996) on the Red River Metis.

15 Hamilton and Inwood (2006) have begun research into identifying the aboriginal population in the 1891 Census of Canada.

Citation: Di Matteo, Livio. “The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey”. EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2007. URL
http://eh.net/encyclopedia/the-use-of-quantitative-micro-data-in-canadian-economic-history-a-brief-survey/

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and deadlier than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the Black Death-bubonic plague link, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of Black Death as bubonic plague have minimized differences between modern bubonic and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt, but the inability to replicate initially positive results has left uncertainty. New analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

The Black Death’s socioeconomic impact stemmed, however, from sudden mortality on a staggering scale, regardless of what bacillus caused it. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers knowing precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. A chronicler famously closed his narrative with empty membranes (blank leaves of parchment) should anyone survive to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence (legal and governmental documents, ecclesiastical records, commercial archives) presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, like England’s Domesday Book (1086) or the Poll Tax Return (1377), either enumerates only heads of households, excludes slices of the populace, ignores regions, or suffers some combination of all these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.
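The arithmetic behind such estimates can be made explicit with a deliberately hypothetical calculation; none of the figures below come from the records or studies cited here. If \(H\) households are recorded, \(\bar{h}\) is the assumed mean household size, and \(e\) is the assumed share of the population missed through evasion or omission, then

\[
\hat{P} = \frac{H \times \bar{h}}{1 - e}
\]

With \(H = 10{,}000\), \(\bar{h} = 4.5\), and \(e = 0.1\), the estimate is \(10{,}000 \times 4.5 / 0.9 = 50{,}000\); assuming a household size of 5 instead yields roughly 55,600. Small shifts in unverifiable assumptions thus move totals by ten percent or more, which is one reason the literature reports ranges rather than point estimates.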

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest, but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent, while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one third and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s demographers have found higher levels of mortality plausible, and European mortality of one half is considered defensible, a figure not too distant from less fanciful contemporary observations.
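A hypothetical pair of regions shows why casual averaging misleads; the numbers are invented purely for illustration. Suppose one region of one million people suffered sixty percent mortality and another of four million suffered thirty percent:

\[
\text{unweighted: } \frac{0.60 + 0.30}{2} = 0.45, \qquad
\text{population-weighted: } \frac{(1)(0.60) + (4)(0.30)}{1 + 4} = 0.36
\]

The naive composite of forty-five percent overstates the true aggregate death rate of thirty-six percent because it weights the small region as heavily as the large one. Lacking reliable population weights for most medieval regions, any Europe-wide composite inherits this distortion.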

While the Black Death of 1347-53 inflicted demographic carnage, had it been an isolated event European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease’s long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. Deadliness of outbreaks declined, to perhaps ten to twenty percent in the second plague (pestis secunda) of 1361-62, ten to fifteen percent in the third plague (pestis tertia) of 1369, and as low as five and rarely above ten percent thereafter, and outbreaks became more localized; however, the Black Death’s persistence ensured that demographic recovery would be slow and socioeconomic consequences deeper. Europe’s population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).

Enumeration of corpses does not adequately reflect the Black Death’s demographic impact. Who perished was equally significant as how many; in other words, the structure of mortality influenced the time and rate of demographic recovery. The plague’s preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague’s initial recurrence (1361-62) that it became known as the Children’s Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among plague survivors, the Black Death may have ultimately resembled a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places, but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague’s death toll, its socioeconomic impact resists categorical measurement. The Black Death’s timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death’s economic impact from the manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of the wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against the society’s ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, inevitably reducing the number of livestock and making manure for fertilizer scarcer. These expedients boosted gross productivity in the immediate term yet drove grain yields downward in the longer term, exacerbating the imbalance between population and food supply; redressing the imbalance became inevitable. This idea’s adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops and the ensuing Great Famine (1315-22) reduced northern Europe’s population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors (climate, imperfect institutions, monetary imbalances, overpopulation) diminish the Black Death’s role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death’s deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of “progressive” agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village’s routine by the following year in most cases: fields were plowed; crops were sown, tended, and harvested; labor services were performed by the peasantry; the village’s lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death’s principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant’s favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride, while they condemned their present in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment’s desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague’s aftermath. Wages in England rose from twelve to twenty-eight percent from the 1340s to the 1350s and twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348-49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping, but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague, and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per-capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings either bought him no more or often substantially less than before the magna pestilencia (Munro, 2003; Aberth, 2001).
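The squeeze on real earnings is simple arithmetic; the percentages below are illustrative only and are not drawn from the wage and price series cited above. Writing \(w_r\) for the real wage, \(w_n\) for the nominal wage, and \(P\) for the price level,

\[
w_r = \frac{w_n}{P}, \qquad \frac{1.30}{1.60} \approx 0.81
\]

so a laborer whose cash wage rose thirty percent while prices rose sixty percent commanded only about four fifths of his former purchasing power, despite the apparently generous raise.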

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350-51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349-50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349-50 directly due to the plague was followed in 1350-51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond the loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord’s revenues shrank accordingly. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335-6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre—plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor-intensive demesne agriculture common elsewhere was largely absent, the Catalan lord, through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments), kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge anything had changed (Freedman, 1991; Razi, 1981).

Europe’s political elite also looked to legal coercion not only to contain rising wages and to limit the peasant’s mobility but also to allay a sense of disquietude and disorientation arising from the Black Death’s buffeting of pre-plague social realities. England’s Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Cortes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown’s 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether it was casual wage labor or a manor’s permanent stipendiary staff (famuli) or the dependent peasant, led to revision of managerial policy. The abbot of Saint Edmund’s, for example, focused on reconstitution of the permanent staff (famuli) on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot’s provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvesting season. Less vital labor services were commuted for cash payments, and ad hoc wage labor was then hired to fill gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time, in fact, circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague as well as poor harvests in the 1350s and 1360s boosted grain prices and partially compensated for more expensive labor. This so-called “Indian summer” of demesne agriculture ended quickly in the mid-1370s in England and subsequently on the continent, when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, such as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361-62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Record Office 3/15.3/2.9-2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries-old manorial system gradually unraveled and the nature of agriculture was transformed. The lord’s earliest concession to this new reality was curtailment of cultivated acreage, a trend that accelerated with time. The 590.5 acres sown on average at Great Saxham (Suffolk) in the late 1330s, for instance, was more than halved (288.67 acres) in the 1360s (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor-intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep-raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Suitable climate, soil, and markets made grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dye-stuffs attractive alternatives to grain. In hope of selling these cash crops, rural agriculture became more attuned to urban demand, and urban businessmen and investors became more intimately involved in what and how much of it was grown in the countryside (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre-plague regime, initially granted brief terminal leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing, the placing of the demesne or even the entire manor “at farm” (ad firmam). A “farmer” (firmarius) paid the lord a fixed annual “farm” (firma) for the right to exploit the lord’s property and take whatever profit he could. The distant or unprofitable manor was usually “farmed” first and other manors followed until a lord’s personal management of his property often ceased entirely. The rising popularity of this expedient made direct management of demesne by lord rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord’s discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Due to the peasant’s generally improved standard of living, the century and a half following the magna pestilencia has been labeled a “golden age” in which the most successful peasant became a “yeoman” or “kulak” within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England’s humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above his station and to retain the distinction between low- and highborn, attests both to the peasant’s greater income and to the desire of the elite to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish Palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death’s impact on the economy’s commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, and commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double-entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, merchant banking, etc.). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence, and the emergence of so-called “super companies” such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians a century past, uncomprehending of how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an “age of adversity” in which the economy was placed under the rubric “depression of the late Middle Ages.” The historiographical pendulum now swings away from this interpretation and a more nuanced picture has emerged that gives the Black Death’s impact on commerce its full due but emphasizes the variety of the plague’s impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more severely depleted than in the countryside, due to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in per capita productivity by skilled labor, remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short-term imbalance in supply and demand then had to reshape his business’s output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. Moreover, the plague’s psychological impact is believed to have influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the windfall itself had been spent, within a generation or so of the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son-in-law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women also were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside, and gild monopolies and gild restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability, and the most successful post-plague businessman not merely weathered bad times but located opportunities within adversity and exploited them. The post-plague entrepreneur’s preference for short-term rather than long-term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave open entrepreneurial options, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post-plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best underscore the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and Italians, the textile trade’s other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources for wool outside England (Spain), and increased production of linen but also produced silks and cottons, once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini’s family business, with its limited geographical ambitions, better exercised control, was more nimble and flexible as opportunities vanished or materialized, and more effectively managed risk, all keys to success. Through voluminous correspondence with his business associates, subordinates, and agents, and through conspicuously careful and regular accounting, Datini grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar for commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising is rarely susceptible to a single-cause analysis, and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325-28] Flanders and in English monastic towns [1326-27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At the times of greatest stress (the Great Famine and the Black Death) disorder erupted, but no large-scale, organized uprising materialized. Manorial oppression likewise is difficult to defend when the peasant in the plague’s aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled “peasant” uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant’s horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely sanctioned, static social order and buffeted a belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic, socially revolutionary message of preachers like England’s John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years’ War, depredation by marauding mercenary bands in the French countryside, and the peasantry’s conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355-58), tensions arose from the Parisian bourgeoisie’s discontent with the war’s progress, the crown’s imposition of regressive sales and head taxes, and devaluation of currency rather than from change attributable to the Black Death.

In the English Peasants’ Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more open to provocative sermonizing, but labor legislation had not halted higher wages or improvement in the standard of living for the peasant. Discontent more likely arose from an unsatisfying pace of improvement in the peasant’s lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the discontent. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378-83), restrictive gild regulations and the denial of political voice to workers, both sharpened after the Black Death, raised tensions; however, Florence’s war with the papacy and an economic slump in the 1370s, which resulted in devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. Once the value of the penny was restored to its former level in 1383, the rebellion in fact subsided.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague’s greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than had the growing high medieval economy. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker’s grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents if not outright ownership of land had largely displaced customary dues and services and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, a beneficial phenomenon for the peasant. More pronounced socioeconomic gradations developed among peasants as some, especially more prosperous ones, exploited the changed circumstances, especially the availability of land. The peasant’s gain was the lord’s loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe’s population to roughly its level of c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe’s resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275-87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348-1350, A Brief History with Documents. Boston and New York: Bedford/St. Martin’s, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre-Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1-19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346-1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348-49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64-113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349-1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833-63.

Bolton, Jim L. “‘The World Upside Down’: Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1-34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26-46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000-1700. Third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. “After the Black Death: Labour Legislation and Attitudes toward Labour in Late-Medieval Western Europe.” Economic History Review 60 (2007): 457-85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455-70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85-90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. “Detection of 400-Year-Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia.” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637-40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200-1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611-23.

Farmer, David L. “Prices and Wages.” In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715-817. Cambridge: Cambridge University Press, 1988.

Farmer, David L. “Prices and Wages, 1350-1500.” In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431-94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131-48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385-98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society, 5th ser., 16 (1966): 23-42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240-1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3-35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy, 1348-1530. London and Basingstoke: Macmillan, 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge, Mass. and London: Harvard University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200-1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127-40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300-1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. “The Plague in Britain.” Historical Journal 14 (1971): 205-15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270-1570.” Journal of Early Modern History 3 (1999): 1-74.

Munro, John H. “Wage-Stickiness, Monetary Changes, and Real Incomes in Late-Medieval England and the Low Countries, 1300-1500: Did Money Matter?” Research in Economic History 21 (2003): 185-297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335-1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late-Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex, 1350-1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. G. An Economic History of Medieval Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (2000): 12800-3.

Razi, Zvi. “Family, Land, and the Village Community in Later Medieval England.” Past and Present 93 (1981): 3-36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World’s Deadliest Serial Killer. Chichester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/