EH.net is owned and operated by the Economic History Association
with the support of other sponsoring organizations.

History of Food and Drug Regulation in the United States

Marc T. Law, University of Vermont

Throughout history, governments have regulated food and drug products. In general, the focus of this regulation has been on ensuring the quality and safety of food and drugs. Food and drug regulation as we know it today in the United States had its roots in the late nineteenth century when state and local governments began to enact food and drug regulations in earnest. Federal regulation of the industry began on a large scale in the early twentieth century when Congress enacted the Pure Food and Drugs Act of 1906. The regulatory agency spawned by this law – the U.S. Food and Drug Administration (FDA) – now directly regulates between one-fifth and one-quarter of U.S. gross domestic product (GDP) and possesses significant power over product entry, the ways in which food and drugs are marketed to consumers, and the manufacturing practices of food and drug firms. This article will focus on the evolution of food and drug regulation in the United States from the middle of the nineteenth century until the present day.1

General Issues in Food and Drug Regulation

Perhaps the most enduring problem in the food and drug industry has been the issue of “adulteration” – the cheapening of products through the addition of impure or inferior ingredients. Since ancient times, producers of food and drug products have attempted to alter their wares in an effort to charge high prices for cheaper goods. For instance, water has often been added to wine, the cream skimmed from milk, and chalk added to bread. Hence, regulations governing what could or could not be added to food and drug products have been very common, as have regulations that require the use of official weights and measures. Because the adulteration of food and drugs may pose both economic and health risks to consumers, the stated public interest motivation for food and drug regulation has generally been to protect consumers from fraudulent and/or unsafe food and drug products.

From an economic perspective, regulations like these may be justified in markets where producers know more about product quality than consumers. As Akerlof (1970) demonstrates, when consumers have less information about product quality than producers, lower quality products (which are generally cheaper to produce) may drive out higher quality products. Asymmetric information about product quality may thus result in lower quality products – the so-called “lemons” – dominating the market. To the extent that regulators are better informed about quality than consumers, regulation that punishes firms that cheat on quality or that requires firms to disclose information about product quality can improve efficiency. Thus, regulations governing what can or cannot be added to products, how products are labeled, and whether certain products can be safely sold to consumers, can be justified in the public interest if consumers do not possess the information to accurately discern these aspects of product quality on their own. Regulations that solve the asymmetric information problem benefit consumers who desire better information about product quality, as well as producers of higher quality products, who desire to segment the market for their wares.
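Akerlof's unraveling logic can be made concrete with a toy calculation. The sketch below is purely illustrative and not from the article: it assumes quality is uniformly distributed on [0, 1], that sellers offer only goods worth no more than the going price, and that buyers bid 1.5 times the average quality they expect to receive.

```python
# Minimal numerical sketch of Akerlof's "lemons" unraveling.
# Assumptions (illustrative only): quality q ~ Uniform[0, 1]; a seller
# offers a good only if q <= price; buyers pay 1.5 x the average quality
# of the goods actually offered.

def market_price(p0=1.0, rounds=50):
    """Iterate the price until buyers' expectations are self-fulfilling."""
    p = p0
    for _ in range(rounds):
        # Only sellers with q <= p offer their goods, so the average
        # offered quality under the uniform assumption is min(p, 1)/2.
        avg_quality = min(p, 1.0) / 2
        p = 1.5 * avg_quality  # buyers bid 1.5 x expected quality
    return p

print(market_price())  # tends toward zero: the market unravels entirely
```

Each iteration shrinks the price by a factor of 0.75, so no positive price is self-fulfilling: the best goods exit first, buyers lower their bids, and the process repeats until no trade occurs.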

For certain products, it may be relatively easy for consumers to know whether or not they have been deceived into purchasing a low quality product after consuming it. For such goods, sometimes called “experience goods,” market mechanisms like branding or repeat purchase may be adequate to solve the asymmetric information problem. Consumers can “punish” firms that cheat on quality by taking their business elsewhere (Klein and Leffler 1981). Hence, as long as consumers are able to identify whether or not they have been cheated, regulation may not be needed to solve the asymmetric information problem. However, for those products where quality is not easily ascertained by consumers even after consuming the product, market mechanisms are unlikely to be adequate since it is impossible for consumers to punish cheaters if they cannot determine whether or not they have in fact been cheated (Darby and Karni 1973; McCluskey 2000). For such “credence goods,” market mechanisms may therefore be insufficient to ensure that the right level of quality is delivered. Like all goods, food and drugs are multidimensional in terms of product quality. Some dimensions of quality (for instance, flavor or texture) are experience goods because they can be easily determined upon consumption. Other dimensions (for instance, the ingredients contained in certain foods, the caloric content of foods, whether or not an item is “organic,” or the therapeutic merits of medicines) are better characterized as credence goods since it may not be obvious to even a sophisticated consumer whether or not he has been cheated. Hence, there are a priori reasons to believe that market forces will not be adequate to solve the asymmetric information problem that plagues many dimensions of food and drug quality.
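The contrast between the two cases can be illustrated with a stylized calculation (an assumption-laden sketch, not a model from the article): a firm earns a margin each period and can pocket an extra per-unit saving by cheating on quality. For an experience good the cheat is detected on consumption and repeat purchases stop; for a credence good it is never detected.

```python
# Stylized sketch: repeat purchase disciplines cheating on experience
# goods but not on credence goods. All numbers are illustrative.

def profit_from_cheating(detectable, periods=10, margin=1.0, cheat_savings=0.5):
    """Total profit for a firm that cheats on quality in every period."""
    total = 0.0
    for _ in range(periods):
        total += margin + cheat_savings
        if detectable:   # experience good: consumers notice the cheat
            break        # and take their business elsewhere
    return total

honest_profit = 1.0 * 10  # margin earned over all 10 periods by staying honest

# Experience good: cheating yields 1.5 once, versus 10.0 from honesty.
print(profit_from_cheating(detectable=True))
# Credence good: cheating yields 15.0, so repeat purchase cannot discipline it.
print(profit_from_cheating(detectable=False))
```

The comparison captures the Klein-Leffler logic in the paragraph above: the threat of lost repeat business deters cheating only when consumers can tell they have been cheated.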

Economists have long recognized that regulation is not always enacted to improve efficiency and advance the public interest. Indeed, since Stigler (1971) and Peltzman (1976), it has often been argued that regulation is sought by specific industry groups in order to tilt the competitive playing field to their advantage. For instance, by functioning as an entry barrier, regulation may raise the profits of incumbent firms by precluding the entry of new firms and new products. In these instances of “regulatory capture,” regulation harms efficiency by limiting the extent of competition and innovation in the market. In the context of product quality regulations like those applying to food and drugs, regulation may help incumbent producers by making it more costly for newer products to enter the market. Indeed, regulations that require producers to meet certain minimum standards or that ban the use of certain additives may benefit incumbent producers at the expense of producers of cheaper substitutes. Such regulations may also harm consumers, whose needs may be better met by these new prohibited products. The observation that select producer interests are often among the most vocal proponents of regulation is consistent with this regulatory capture explanation for regulation. Indeed, as we will see, a desire to shift the competitive playing field in favor of the producers of certain products has historically been an important motivation for food and drug regulation.

The fact that producer groups are often among the most important political constituencies in favor of regulation need not, however, imply that regulation necessarily advances the interests of these producers at the expense of efficiency. As noted earlier, to the extent that regulation reduces informational asymmetries about product quality, regulation may benefit producers of higher quality items as well as the consumers of such goods. Indeed, such efficiency-enhancing regulation may be particularly desirable for those producers whose goods are least amenable to market-based solutions to the asymmetric information problem (i.e., credence goods) precisely because it helps these producers expand the market for their wares and increase their profits. Hence, because it is possible for regulation that benefits certain producers to also improve welfare, producer support for regulation should not be taken as prima facie evidence of Stiglerian regulation.

United States’ Experience with Food and Drug Regulation

From colonial times until the mid to late nineteenth century, most food and drug regulation in America was enacted at the state and local level. Additionally, these regulations were generally targeted toward specific food products (Hutt and Hutt 1984). For instance, in 1641 Massachusetts introduced its first food adulteration law, which required the official inspection of beef, pork and fish; this was followed in the 1650s with legislation that regulated the quality of bread. Meanwhile, Virginia in the 1600s enacted laws to regulate weights and measures for corn and to outlaw the sale of adulterated wines.

During the latter half of the nineteenth century, the scale and scope of state level food regulation expanded considerably. Several factors contributed to this growth in legislation. For instance:

  • Specialization and urbanization made households more dependent on food purchased in impersonal markets. While these forces increased the variety of foods available, they also increased uncertainty about quality, since the more specialized and urbanized consumers became, the less they knew about the quality of products purchased from others (Wallis and North 1986).
  • Technological change in food manufacturing gave rise to new products and increased product complexity. The late nineteenth century witnessed the introduction of several new food products including alum-based baking powders, oleomargarine (the first viable substitute for butter), glucose, canned foods, “dressed” (i.e., refrigerated) beef, blended whiskey, chemical preservatives, and so on (Strasser 1989; Young 1989; Goodwin 1999). Unfamiliarity with these new products generated consumer concerns about food safety and food adulteration. Moreover, because many of these new products directly challenged the dominant position enjoyed by more traditional foods, these developments also gave rise to demands for regulation on the part of traditional food producers, who desired regulation to disadvantage these new competitors (Wood 1986).
  • Related to the previous point, the rise of analytic chemistry facilitated the “cheapening” of food in ways that were difficult for consumers to detect. For instance, the introduction of preservatives made it possible for food manufacturers to mask food deterioration. Additionally, the development of glucose as a cheap alternative to sugar facilitated deception on the part of producers of high priced products like maple syrup. Hence, concerns about adulteration intensified. At the same time, however, the rise of analytic chemistry also improved the ability of experts to detect these more subtle forms of food adulteration.
  • Because food adulteration became more difficult to detect, market mechanisms that relied on the ability of consumers to detect cheating ex post became less effective in solving the food adulteration problem. Hence, there was a growing perception that regulation by experts was necessary.2

Given this environment, it is perhaps unsurprising that a mixture of incentives gave rise to food regulation in the late nineteenth century. General pure food and dairy laws that required producers to properly label their products to indicate whether mixtures or impurities were added were likely enacted to help reduce asymmetric information about product quality (Law 2003). While producers of “pure” items also played a role in demanding these regulations, consumer groups – specifically women’s groups and leaders of the fledgling home economics movement – were also an important constituency in favor of regulation because they desired better information about food ingredients (Young 1989; Goodwin 1999). In contrast, narrow producer interest motivations seem to have been more important in generating a demand for more specific food regulations. For instance, state and federal oleomargarine restrictions were clearly enacted at the behest of dairy producing interests, who wanted to limit the availability of oleomargarine (Dupré 1999). Additionally, state and federal meat inspection laws were introduced to placate local butchers and local slaughterhouses in eastern markets who desired to reduce the competitive threat posed by the large mid-western meat packers (Yeager 1981; Libecap 1992).

Federal regulation of the food and drug industry was mostly piecemeal until the early 1900s. In 1848, Congress enacted the Drug Importation Act to curb the import of adulterated medicines. The 1886 oleomargarine tax required margarine manufacturers to stamp their product in various ways, imposed an internal revenue tax of 2 cents per pound on all oleomargarine produced in the United States, and levied a fee of $600 per year on oleomargarine producers, $480 per year on oleomargarine wholesalers, and $48 per year on oleomargarine retailers (Lee 1973; Dupré 1999). The 1891 Meat Inspection Act mandated the inspection of all live cattle intended for export, as well as of all cattle whose meat was to be exported after slaughter. In 1897, Congress passed the Tea Importation Act, which required Customs inspection of tea imported into the United States. Finally, in 1902 Congress enacted the Biologics Control Act to regulate the safety of vaccines and serums used to prevent diseases in humans.

The 1906 Pure Food and Drugs Act and the 1906 Meat Inspection Act

The first general pure food and drug law at the federal level was not enacted until 1906 with the passage of the Pure Food and Drugs Act. While interest in federal regulation arose contemporaneously with interest in state regulation, conflict among competing interest groups regarding the provisions of a federal law made it difficult to build an effective political constituency in favor of federal regulation (Anderson 1958; Young 1989; Law and Libecap 2004). The law that emerged from this long legislative battle was similar in character to the state pure food laws that preceded it in that its focus was on accurate product labeling: it outlawed interstate trade in “adulterated” and “misbranded” foods, and required producers to indicate the presence of mixtures and/or impurities on product labels. Unlike earlier state legislation, however, the adulteration and misbranding provisions of this law also applied to drugs. Additionally, drugs listed in the United States Pharmacopoeia (USP) and the National Formulary (NF) were required to conform to USP and NF standards. Congress enacted the Pure Food and Drugs Act along with the 1906 Meat Inspection Act, which tightened the USDA’s oversight of meat production. This new meat inspection law mandated ante- and post-mortem inspection of livestock, established sanitary standards for slaughterhouses and processing plants, and required continuous USDA inspection of meat processing and packaging. While the desire to create more uniform national food regulations was an important underlying motivation for regulation, it is noteworthy that both of these laws were enacted following a flurry of investigative journalism about the quality of meat and patent medicines.
Specifically, the publication of Upton Sinclair’s The Jungle, with its vivid description of the conditions of the meat packing industry, as well as a series of articles by Samuel Hopkins Adams published in Collier’s Weekly about the dangers associated with patent medicine use, played a key role in provoking legislators to enact federal regulation of food and drugs (Wood 1986; Young 1989; Carpenter 2001; Law and Libecap 2004).3

Responsibility for enforcing the Pure Food and Drugs Act fell to the Bureau of Chemistry, a division within the USDA, which conducted some of the earliest studies of food adulteration within the United States. The Bureau of Chemistry was renamed the Food, Drug, and Insecticide Administration in 1927. In 1931 the name was shortened to the Food and Drug Administration (FDA). In 1940 the FDA was transferred from the USDA to the Federal Security Agency, which, in 1953, was renamed the Department of Health, Education and Welfare.

Whether the 1906 Pure Food and Drugs Act was enacted to advance special interests or to improve efficiency is a subject of some debate. Kolko (1967), for instance, suggests that the law reflected regulatory capture by large, national food manufacturers, who wanted to use federal legislation to disadvantage smaller, local firms. Coppin and High (1999) argue that rent-seeking on the part of bureaucrats within the government – specifically, Dr. Harvey Wiley, chief of the Bureau of Chemistry – was a critical factor in the emergence of this law. According to Coppin and High, Wiley was a “bureaucratic entrepreneur” who sought to ensure the future of his agency. By building ties with pro-regulation interest groups and lobbying in favor of a federal food and drug law, Wiley secured a lasting policy area for his organization. Law and Libecap (2004) argue that a mixture of bureaucratic, producer and consumer interests were in favor of federal food and drugs regulation, but that the last-minute onset of consumer interest in regulation (provoked by muckraking journalism about food and drug quality) played a key role in influencing the timing of regulation.

Enforcement of the Pure Food and Drugs Act met with mixed success. Indeed, the evidence from the enforcement of this law suggests that neither the pure industry-capture nor the public-interest hypothesis provides an adequate account of regulation. On the one hand, some evidence suggests that the fledgling FDA’s enforcement work helped raise standards and reduce informational asymmetries about food quality. For instance, under the Net Weight Amendment of 1919, food and drug packages shipped in interstate commerce were required to be “plainly and conspicuously marked to show the quantity of contents in terms of weight, measure, and numerical count” (Weber 1928, p. 28). Similarly, under the Seafood Amendment of 1934, Gulf coast shrimp packaged under FDA supervision was required to be stamped with a label stating “Production supervised by the U.S. Food and Drug Administration” as a mechanism for ensuring quality and freshness. Additionally, during this period, investigators from the FDA played a key role in helping manufacturers improve the quality and reliability of processed foods, poultry products, food colorings, and canned items (Robinson 1990; Young 1992; Law 2003). On the other hand, the FDA’s efforts to regulate the patent medicine industry – specifically, to regulate the therapeutic claims that patent medicine firms made about their products – were largely unsuccessful. In U.S. v. Johnson (1911), the Supreme Court ruled that therapeutic claims were essentially subjective and hence beyond the reach of this law. This situation was partially alleviated by the Sherley Amendment of 1912, which made it possible for the government to prosecute patent medicine producers who intended to defraud consumers. Effective regulation of pharmaceuticals was generally not possible, however, because under this amendment the government needed to prove fraud in order to successfully prosecute a patent medicine firm for making false therapeutic claims about its products (Young 1967).
Hence, until new legislation was enacted in 1938, the patent medicine industry continued to escape effective federal control.

The 1938 Food, Drugs and Cosmetics Act

Like the law it replaced (the 1906 Pure Food and Drugs Act), the Food, Drugs and Cosmetics Act of 1938 was enacted following a protracted legislative battle. In the early 1930s, the FDA and its Congressional supporters began to lobby in favor of replacing the Pure Food and Drugs Act with stronger legislation that would give the agency greater authority to regulate the patent medicine industry. These efforts were successfully challenged by the patent medicine industry and its Congressional allies until 1938, when the so-called “Elixir Sulfanilamide tragedy” made it impossible for Congress to continue to ignore demands for tighter regulation. The story behind the Elixir Sulfanilamide tragedy is as follows. In 1937, Massengill, a Tennessee drug company, began to market a liquid sulfa drug called Elixir Sulfanilamide. Unfortunately, the solvent in this drug was diethylene glycol, a highly toxic chemical also used in antifreeze; as a result, over 100 people died from taking this drug. Public outcry over this tragedy was critical in breaking the Congressional deadlock over tighter regulation (Young 1967; Jackson 1970; Carpenter and Sin 2002).

Under the 1938 law, the FDA was given considerably greater authority over the food and drug industry. The FDA was granted the power to regulate the therapeutic claims drug manufacturers printed on their product labels; authority over drug advertising, however, rested with the Federal Trade Commission (FTC) under the Wheeler-Lea Act of 1938. Additionally, the new law required that drugs be marketed with adequate directions for safe use, and FDA authority was extended to include medical devices and cosmetics. Perhaps the most striking and novel feature of the 1938 law was that it introduced mandatory pre-market approval for new drugs. Under this new law, drug manufacturers were required to demonstrate to the FDA that a new drug was safe before it could be released to the market. This feature of the legislation was clearly a reaction to the Elixir Sulfanilamide incident; food and drug bills introduced in Congress prior to 1938 did not include provisions requiring mandatory pre-market approval of new drugs.

Within a short period of time, the FDA began to deem some drugs to be so dangerous that no adequate directions could be written for direct use by patients. As a consequence, the FDA created a new class of drugs which would only be available with a physician’s prescription. Ambiguity over whether certain medicines – specifically, amphetamines and barbiturates – could be safely marketed directly to consumers or required a physician’s prescription led to disagreements between physicians, pharmacists, drug companies, and the FDA (Temin 1980). The political response to these conflicts was the Durham-Humphrey Amendment of 1951, which permitted a drug to be sold directly to patients unless, “because of its toxicity or other potentiality for harmful effect, or the method of its use, or the collateral measures necessary to its use,” it could safely be sold and used only under the supervision of a practitioner.

The most significant expansion in FDA authority over drugs in the post World War II period occurred when Congress enacted the 1962 Drug Amendments (also known as the Kefauver-Harris Amendments) to the Food, Drugs and Cosmetics Act. Like the 1938 law, the 1962 Drug Amendments were passed in response to a therapeutic crisis – in this instance, the discovery that the use of thalidomide (a sedative that was marketed to combat the symptoms associated with morning sickness) by pregnant women caused birth deformities in thousands of babies in Europe.4 As a result of these amendments, drug companies were required to establish that drugs were both safe and effective prior to market release (the 1938 law only required proof of safety) and the FDA was granted greater authority to oversee clinical trials for new drugs. Under the 1962 Drug Amendments, responsibility for regulating prescription drug advertising was transferred from the FTC to the FDA; furthermore, the FDA was given the authority to establish good manufacturing practices in the drug industry and the power to access company records to monitor these practices. As a result of these amendments, the United States today has among the toughest drug approval regimes in the developed world.

A large and growing body of scholarship has been devoted to analyzing the economics and politics of the drug approval process. Early work focused on the extent to which the FDA’s pre-market approval process has affected the rate of innovation and the availability of new pharmaceuticals.5 Peltzman (1973), among others, argues that the 1962 Drug Amendments significantly reduced the flow of new drugs onto the market and imposed large welfare losses on society. These views have been challenged by Temin (1980), who maintains that much of the decline in new drug introductions occurred prior to the 1962 Drug Amendments. More recent work, however, suggests that the FDA’s pre-market approval process has indeed reduced the availability of new medicines (Wiggins 1981). In international comparisons, scholars have also found that new medicines generally become available more quickly in Europe than in America, suggesting that tighter regulation in the U.S. has induced a drug lag (Wardell and Lasagna 1975; Grabowski and Vernon 1983; Kaitin and Brown 1995). Some critics believe that the costs of this drug lag are large relative to the benefits because delay in the introduction of new drugs prevents patients from accessing new and more effective medicines. Gieringer (1985), for instance, estimates that the number of deaths that can be attributed to the drug lag far exceeds the number of lives saved by extra caution on the part of the FDA. Hence, according to these authors, the 1962 Drug Amendments may have had adverse consequences for overall welfare.

Other scholarship has examined the pattern of drug approval times in the post 1962 period. It is commonly observed that larger pharmaceutical firms receive faster drug approvals than smaller firms. One interpretation of this fact is that larger firms have “captured” the drug approval process and use the process to disadvantage their smaller competitors. Empirical work by Olson (1997) and Carpenter (2002), however, casts some doubt on this Stiglerian interpretation.6 These authors find that while larger firms do generally receive quicker drug approvals, drug approval times are also responsive to several other factors, including the specific disease at which a drug is directed, the number of applications submitted by the drug company, and the existence of a disease-specific interest group. Indeed, in other work, Carpenter (2004a) demonstrates that a regulator that seeks to maximize its reputation for protecting consumer safety may approve new drugs in ways that appear to benefit large firms.7 Hence, the fact that large pharmaceutical firms obtain faster drug approvals than small firms need not imply that the FDA has been “captured” by these corporations.

Food and Drug Regulation since the 1960s

Since the passage of the 1962 Drug Amendments, federal food and drug regulation in the United States has evolved along several lines. In some cases, regulation has strengthened the government’s authority over various aspects of the food and drug trade. For instance, the 1976 Medical Device Amendments required medical device manufacturers to register with the FDA and to follow quality control guidelines. These amendments also established pre-market approval guidelines for medical devices. Along similar lines, the 1990 Nutrition Labeling and Education Act required all packaged foods to contain standardized nutritional information and standardized information on serving sizes.8

In other cases, regulations have been enacted to streamline the pre-market approval process for new drugs. Concerns that mandatory pre-market approval of new drugs may have reduced the rate at which new pharmaceuticals become available to consumers prompted the FDA to issue new rules in 1991 to accelerate the review of drugs for life-threatening diseases. Similar concerns also motivated Congress to enact the Prescription Drug User Fee Act of 1992 which required drug manufacturers to pay fees to the FDA to review drug approval applications and required the FDA to use these fees to pay for more reviewers to assess these new drug applications.9 Speedier drug approval times have not, however, come without costs. Evidence presented by Olson (2002) suggests that faster drug approval times have also contributed to a higher incidence of adverse drug reactions from new pharmaceuticals.

Finally, in a few instances, legislation has weakened government’s authority over food and drug products. For example, the 1976 Vitamins and Minerals Amendments precluded the FDA from establishing standards that limited the potency of vitamins and minerals added to foods. Similarly, the 1994 Dietary Supplement Health and Education Act weakened the FDA’s ability to regulate dietary supplements by classifying them as foods rather than drugs. In these cases, the consumers and producers of “natural” or “herbal” remedies played a key role in pushing Congress to limit the FDA’s authority.

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84, no. 3 (1970): 488-500.

Anderson, Oscar E. Jr. The Health of a Nation: Harvey W. Wiley and the Fight for Pure Food. Chicago: University of Chicago Press, 1958.

Carpenter, Daniel P. The Forging of Bureaucratic Autonomy: Reputation, Networks, and Policy Innovation in Executive Agencies, 1862-1928. Princeton: Princeton University Press, 2001.

Carpenter, Daniel P. “Groups, the Media, Agency Waiting Costs, and FDA Drug Approval.” American Journal of Political Science 46, no. 2 (2002): 490-505.

Carpenter, Daniel P. “Protection without Capture: Drug Approval by a Politically Responsive, Bayesian Regulator.” American Political Science Review (2004a), forthcoming.

Carpenter, Daniel P. “The Political Economy of FDA Drug Review: Processing, Politics, and Lessons for Policy.” Health Affairs 23, no. 1 (2004b): 52-63.

Carpenter, Daniel P. and Gisela Sin. “Crisis and the Emergence of Economic Regulation: The Food, Drug and Cosmetics Act of 1938.” University of Michigan, Department of Political Science, unpublished manuscript, 2002.

Comanor, William S. “The Political Economy of the Pharmaceutical Industry.” Journal of Economic Literature 24, no. 3 (1986): 1178-1217.

Coppin, Clayton and Jack High. The Politics of Purity: Harvey Washington Wiley and the Origins of Federal Food Policy. Ann Arbor: University of Michigan Press, 1999.

Darby, Michael R. and Edi Karni. “Free Competition and the Optimal Amount of Fraud.” Journal of Law and Economics 16, no. 1 (1973): 67-88.

Dupré, Ruth. “If It’s Yellow, It Must be Butter: Margarine Regulation in North America since 1886.” Journal of Economic History 59, no. 2 (1999): 353-71.

French, Michael and Jim Phillips. Cheated Not Poisoned? Food Regulation in the United Kingdom, 1875-1938. Manchester: Manchester University Press, 2000.

Gieringer, Dale H. “The Safety and Efficacy of New Drug Approvals.” Cato Journal 5, no. 1 (1985): 177-201.

Goodwin, Lorine S. The Pure Food, Drink, and Drug Crusaders, 1879-1914. Jefferson, NC: McFarland & Company, 1999.

Grabowski, Henry G. and John M. Vernon. The Regulation of Pharmaceuticals: Balancing the Benefits and Risks. Washington, DC: American Enterprise Institute, 1983.

Harris, Steven B. “The Right Lesson to Learn from Thalidomide.” 1992. Available at: http://w3.aces.uiuc.edu:8001/Liberty/Tales/Thalidomide.html.

Hutt, Peter Barton and Peter Barton Hutt II. “A History of Government Regulation of Adulteration and Misbranding of Food.” Food, Drug and Cosmetic Law Journal 39 (1984): 2-73.

Ippolito, Pauline M. and Janis K. Pappalardo. Advertising, Nutrition, and Health: Evidence from Food Advertising, 1977-1997. Bureau of Economics Staff Report. Washington, DC: Federal Trade Commission, 2002.

Jackson, Charles O. Food and Drug Legislation in the New Deal. Princeton: Princeton University Press, 1970.

Kaitin, Kenneth I. and Jeffrey S. Brown. “A Drug Lag Update.” Drug Information Journal 29, no. 2 (1995): 361-73.

Klein, Benjamin and Keith B. Leffler. “The Role of Market Forces in Assuring Contractual Performance.” Journal of Political Economy 89, no. 4 (1981): 615-41.

Kolko, Gabriel. The Triumph of Conservatism: A Reinterpretation of American History. New York: Macmillan, 1967.

Law, Marc T. “The Origins of State Pure Food Regulation.” Journal of Economic History 63, no. 4 (2003): 1103-1130.

Law, Marc T. “How Do Regulators Regulate? Enforcement of the Pure Food and Drugs Act, 1907-38.” University of Vermont, Department of Economics, unpublished manuscript, 2003.

Law, Marc T. and Gary D. Libecap. “The Determinants of Progressive Era Reform: The Pure Food and Drug Act of 1906.” In Corruption and Reform: Lessons from America’s History, edited by Edward Glaeser and Claudia Goldin. Chicago: University of Chicago Press, 2004 (forthcoming).

Lee, R. Alton. A History of Regulatory Taxation. Lexington: University of Kentucky Press, 1973.

Libecap, Gary D. “The Rise of the Chicago Packers and the Origins of Meat Inspection and Antitrust.” Economic Inquiry 30, no. 2 (1992): 242-262.

Mathios, Alan D. “The Impact of Mandatory Disclosure Laws on Product Choices: An Analysis of the Salad Dressing Market.” Journal of Law and Economics 43, no. 2 (2000): 651-77.

McCluskey, Jill J. “A Game Theoretic Approach to Organic Foods: An Analysis of Asymmetric Information and Policy.” Agricultural and Resource Economics Review 29, no. 1 (2000): 1-9.

Olson, Mary K. “Regulatory Agency Discretion Among Competing Industries: Inside the FDA.” Journal of Law, Economics, and Organization 11, no. 2 (1995): 379-401.

Olson, Mary K. “Explaining Regulatory Behavior in the FDA: Political Control vs. Agency Discretion.” In Advances in the Study of Entrepreneurship, Innovation, and Economic Growth, edited by Gary D. Libecap, 71-108, Greenwich: JAI Press, 1996a.

Olson, Mary K. “Substitution in Regulatory Agencies: FDA Enforcement Alternatives.” Journal of Law, Economics, and Organization 12, no. 2 (1996b): 376-407.

Olson, Mary K. “Firms’ Influences on FDA Drug Approval.” Journal of Economics and Management Strategy 6, no. 2 (1997): 377-401.

Olson, Mary K. “Regulatory Reform and Bureaucratic Responsiveness to Firms: The Impact of User Fees in the FDA.” Journal of Economics and Management Strategy 9, no. 3 (2000): 363-95.

Olson, Mary K. “Pharmaceutical Policy Change and the Safety of New Drugs.” Journal of Law and Economics 45, no 2, Part II (2002): 615-42.

Peltzman, Sam. “An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments.” Journal of Political Economy 81, no. 5 (1973): 1049-1091.

Peltzman, Sam. “Toward a More General Theory of Regulation.” Journal of Law and Economics 19, no. 2 (1976): 211-40.

Robinson, Lisa M. “Regulating What We Eat: Mary Engle Pennington and the Food Research Laboratory.” Agricultural History 64 (1990): 143-53.

Stigler, George J. “The Theory of Economic Regulation.” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3-21.

Strasser, Susan. Satisfaction Guaranteed: The Making of the American Mass Market. New York: Pantheon Books, 1989.

Temin, Peter. Taking Your Medicine: Drug Regulation in the United States. Cambridge: Harvard University Press, 1980.

Wallis, John J. and Douglass C. North. “Measuring the Transaction Sector of the American Economy, 1870-1970.” In Long Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 95-148. Chicago: University of Chicago Press, 1986.

Wardell, William M. and Louis Lasagna. Regulation and Drug Development. Washington, DC: American Enterprise Institute, 1975.

Weber, Gustavus. The Food, Drug and Insecticide Administration: Its History, Activities, and Organization. Baltimore: Johns Hopkins University Press, 1928.

Wiggins, Steven N. “Product Quality Regulation and New Drug Introductions: Some New Evidence from the 1970s.” Review of Economics and Statistics 63, no. 4 (1981): 615-19.

Wood, Donna J. The Strategic Use of Public Policy: Business and Government in the Progressive Era. Marshfield, MA: Pitman Publishing, 1986.

Yeager, Mary A. Competition and Regulation: The Development of Oligopoly in the Meat Packing Industry. Greenwich, CT: JAI Press, 1981.

Young, James H. The Medical Messiahs: A Social History of Quackery in Twentieth Century America. Princeton: Princeton University Press, 1967.

Young, James H. Pure Food: Securing the Federal Food and Drugs Act of 1906. Princeton: Princeton University Press, 1986.

Young, James H. “Food and Drug Enforcers in the 1920s: Restraining and Educating Business.” Business and Economic History 21 (1992): 119-128.

1 See Hutt and Hutt (1984) for an excellent survey of the history of food regulation in earlier times. French and Phillips (2000) discuss the development of food regulation in the United Kingdom.

2 This rationale for regulation was articulated by a member of the 49th Congress (1885):

In ordinary cases the consumer may be left to his own intelligence to protect himself against impositions. By the exercise of a reasonable degree of caution, he can protect himself from frauds in under-weight and in under-measure. If he can not detect a paper-soled shoe on inspection he detects it in the wearing of it, and in one way or another he can impose a penalty upon the fraudulent vendor. As a general rule the doctrine of laissez faire can be applied. Not so with many of the adulterations of food. Scientific inspection is needed to detect the fraud, and scientific inspection is beyond the reach of the ordinary consumer. In such cases, the Government should intervene (Congressional Record, 49th Congress, 1st Session, pp. 5040-41).

3 It is noteworthy that in writing The Jungle, Sinclair’s motivation was not to obtain federal meat inspection legislation, but rather, to provoke public outrage over industrial working conditions. “I aimed at the public’s heart,” he later wrote, “and by accident I hit it in the stomach.” (Quoted in Kolko 1967, p. 103.)

4 Thalidomide was not approved for sale in the U.S. The fact that an FDA official – Dr. Frances Kelsey, an FDA drug examiner – played a key role in blocking its availability in the United States gave even more legitimacy to the view that the FDA’s authority over pharmaceuticals needed to be strengthened. See Temin (1980, pp. 123-24). Ironically, Dr. Kelsey’s efforts to block the introduction of thalidomide in the United States stemmed not from knowledge that thalidomide caused birth defects, but rather from concerns that thalidomide might cause neuropathy (a disease of the nervous system) in some of its users. Indeed, the association between thalidomide and birth defects was discovered by researchers in Europe, not by drug investigators at the FDA. Hence, the FDA may not in fact have deserved the credit it was given in preventing the thalidomide tragedy from spreading to the U.S. (Harris 1992).

5 See Comanor (1986) for a summary of this literature.

6 Along these lines, Olson (1995, 1996a, 1996b) also finds that other aspects of the FDA’s enforcement work from the 1970s until the present are generally responsive to pressures from multiple interest groups including firms, consumer groups, the media, and Congress.

7 For a very readable discussion of this perspective see Carpenter (2004b).

8 See Mathios (2000) and Ippolito and Pappalardo (2002) for analyses of the effects of this law on food consumption choices.

9 See Olson (2000) for analysis of the effects of these user fees on approval times.

Citation: Law, Marc. “History of Food and Drug Regulation in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. October 11, 2004. URL http://eh.net/encyclopedia/history-of-food-and-drug-regulation-in-the-united-states/

Economy of England at the Time of the Norman Conquest

John McDonald, Flinders University, Adelaide, Australia

The Domesday Survey of 1086 provides high quality and detailed information on the inputs, outputs and tax assessments of most English estates. This article describes how the data have been used to reconstruct the eleventh-century Domesday economy. By exploiting modern economic theory and statistical methods the reconstruction has led to a radically different assessment of the way in which the Domesday economy and fiscal system were organized. It appears that tax assessments were based on a capacity to pay principle subject to politically expedient concessions and we can discover who received lenient assessments and why. Penetrating questions can be asked about the economy. We can compare the efficiency of Domesday agricultural production with the efficiency of more modern economies, measure the productivity of inputs and assess the impact of feudalism and manorialism on economic activity. The emerging picture of a reasonably well organized economy and fair tax system contrasts with the assessment of earlier historians who saw the Normans as capable military and civil administrators but regarded the economy as haphazardly run and tax assessments as “artificial” or arbitrary. The next section describes the Survey, the contemporary institutional arrangements and the main features of Domesday agricultural production. Some key findings on the Domesday economy and tax system are then briefly discussed.

Domesday England and the Domesday Survey

William the Conqueror invaded England from France in 1066 and carried out the Domesday Survey twenty years later. By 1086, Norman rule had been largely consolidated, although only after rebellion and civil dissent had been harshly put down. The Conquest was achieved by an elite, and, although the Normans brought new institutions and practices, these were superimposed on the existing order. Most of the Anglo-Saxon aristocracy were eliminated, the lands of over 4,000 English lords passing to fewer than 200 Norman barons, with much of the land held by just a handful of magnates.

William ruled vigorously through the Great Council. England was divided into shires, or counties, which were subdivided into hundreds. There was a sophisticated and long established shire administration. The sheriff was the king’s agent in the county, royal orders could be transmitted through the county and hundred courts, and an effective taxation collection system was in place.

England was a feudal state. All land belonged to the king. He appointed tenants-in-chief, both lay and ecclesiastical, who usually held land in return for providing a quota of fully equipped knights. The tenants-in-chief might then grant the land to sub-tenants in return for rents or services, or work the estate themselves through a bailiff. Although the Survey records 112 boroughs, agriculture was the predominant economic activity, with stock rearing of greater importance in the south-west and arable farming more important in the east and midlands. Manorialism was a pervasive influence, although it existed in most parts of England in a modified form. On the manor the peasants worked the lord’s demesne in return for protection, housing, and the use of plots of land to cultivate their own crops. They were tied to the lord and the manor and provided a resident workforce. The demesne was also worked by slaves who were fed and housed by the lord.

The Domesday Survey was commissioned on Christmas Day, 1085, and it is generally thought that work on summarizing the Survey was terminated with the death of William in September 1087. The task was facilitated by the availability of Anglo-Saxon hidage (tax) lists. The counties of England were grouped into (probably) seven circuits. Each circuit was visited by a team of commissioners (bishops, lawyers and lay barons) who had no material interests in the area. The commissioners were responsible for circulating a list of questions to land holders, for subjecting the responses to a review in the county court by the hundred juries, often consisting of half Englishmen and half Frenchmen, and for supervising the compilation of county and circuit returns. The circuit returns were then sent to the Exchequer in Winchester where they were summarized, edited and compiled into Great Domesday Book.

Unlike modern surveys, individual questionnaire responses were not treated confidentially but became public knowledge, being verified in the courts by landholders with local knowledge. In such circumstances, the opportunities for giving false or misleading evidence were limited.

Domesday Book consists of two volumes, Great (or Exchequer) Domesday and Little Domesday. Little Domesday is a detailed original survey circuit return of circuit VII, Essex, Norfolk and Suffolk. Great Domesday is a summarized version of the other circuit returns sent to the King’s treasury in Winchester. (It is thought that the death of William occurred before Essex and East Anglia could be included in Great Domesday.) The two volumes contain information on the net incomes or outputs (referred to as the annual values), tax assessments and resources of most manors in England in 1086, some information for 1066, and sometimes also for an intermediate year. The information was used to revise tax assessments and document the feudal structure, “who held what, and owed what, to whom.”

Taxation

The Domesday tax assessments relate to a non-feudal tax, the geld, thought to be levied annually by the end of William’s reign. The tax can be traced back to the danegeld, and, although originally a land tax, by Norman times, it was more broadly based and a significant impost on landholders.

There is an extensive literature on the Norman tax system, much of it influenced by Round (1895), who considered the assessments to be “artificial,” in the sense that they were imposed from above via the county and hundred with little or no consideration of the capacity of an individual estate to pay the tax. Round largely based his argument on an unsystematic and subjective review of the distribution of the assessments across estates, vills and the hundreds of counties.

In two studies (McDonald and Snooks, 1985a, and 1986, Ch. 4), Graeme Snooks and I argued that, contrary to Round’s hypothesis, the tax assessments were based on a capacity to pay principle, subject to some politically expedient tax concessions. Similar tax systems operate in most modern societies and reflect an attempt to collect revenue in a politically acceptable way. Using statistical methods, we found empirical support for the hypothesis. We showed, for example, that for Essex lay estates about 65 percent of variation in the tax assessments could be attributed to variations in manorial net incomes or manorial resources, two alternative ways of measuring capacity to pay. Similar results were obtained for other counties. Capacity to pay explains from 64 to 89 percent of variation in individual estate assessment data for the counties of Buckinghamshire, Cambridgeshire, Essex and Wiltshire, and from 72 to 81 percent for aggregate data for 29 counties (see McDonald and Snooks, 1987a). The estimated tax relationships capture the main features of the tax system.
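The flavor of this exercise can be conveyed with a small sketch. The figures below are synthetic, invented purely for illustration (they are not Survey data): tax assessments are generated as a noisy function of manorial net income, and the share of assessment variation "explained" by capacity to pay is then recovered as the R-squared of a least-squares fit.

```python
import numpy as np

# Synthetic stand-in for Domesday estate data (illustrative only).
rng = np.random.default_rng(0)
income = rng.uniform(1, 40, size=200)            # manorial net income
tax = 0.3 * income + rng.normal(0, 2, 200)       # assessment = capacity to pay + noise

# Fit tax = a + b * income by least squares and compute R^2,
# the share of assessment variation attributable to capacity to pay.
b, a = np.polyfit(income, tax, 1)
fitted = a + b * income
r2 = 1 - np.sum((tax - fitted) ** 2) / np.sum((tax - tax.mean()) ** 2)
```

With data of this shape, R-squared values in the 0.6-0.9 range reported for the Domesday counties indicate a strong, systematic tax-income relationship rather than an "artificial" assessment.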

Capacity to pay explains most variation in tax assessments, but some variation remains. Who and which estates were treated favorably? And what factors were associated with lenient taxation? These issues were investigated in McDonald (1998) where frontier methods were used to derive a measure of how favorable the tax assessments were for each Essex lay estate. (The frontier methods, also known as “data envelopment analysis,” use the tax and income observations to trace out an outer bound, or frontier, for the tax relationship.) Estates, tenants-in-chief and local areas (hundreds) of the county with lenient assessments were identified, and statistical methods used to discover factors associated with favorable assessments. Some significant factors were the tenant-in-chief holding the estate (assessments tended to be less beneficial for the tenants-in-chief holding a large number of estates in Essex), the hundred location (some hundreds receiving more favorable treatment than others), proximity to an urban center (estates remote from the urban centers being more favorably treated), economic size of the estate (larger estates being less favorably treated) and tenure (estates held as sub-tenancies having more lenient assessments). The results suggest a similarity with more modern tax systems, with some groups and activities receiving minor concessions and the administrative process inducing some unevenness in the assessments. Although many details of the tax system have been lost in the mists of time, careful analysis of the Survey data has enabled us to rediscover its main features.
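The frontier logic can be illustrated with a minimal sketch. The numbers below are invented, and the simple "free disposal" frontier here is only a stand-in for the linear-programming-based data envelopment analysis used in the actual study: the frontier tax at a given income is taken to be the highest assessment levied on any estate with no greater income, and the ratio of an estate's actual tax to its frontier tax measures how leniently it was assessed (lower means more favorable).

```python
import numpy as np

# Hypothetical estates: net incomes and tax assessments (illustrative only).
income = np.array([5.0, 8.0, 12.0, 12.0, 20.0, 30.0])
tax    = np.array([2.0, 4.0, 3.0,  6.0,  7.0, 12.0])

def frontier_tax(x, incomes, taxes):
    """Highest tax borne by any estate with income <= x (the outer bound)."""
    return taxes[incomes <= x].max()

# Ratio of actual to frontier tax: 1.0 on the frontier, below 1.0 if lenient.
leniency = np.array([tax[i] / frontier_tax(income[i], income, tax)
                     for i in range(len(tax))])
```

In this toy example the third estate pays 3 while another estate of equal income pays 6, so its ratio is 0.5: it is leniently assessed relative to the frontier. The study then regresses such measures on estate characteristics (tenant-in-chief, hundred, location, size, tenure) to identify who received favorable treatment.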

Production

Since Victorian times historians have used Domesday Book to study the political, institutional and social structures and the geography of Domesday England. However, the early scholars tended to draw away from economic issues. They were unable to perceive that systematic economic relationships were present in the Domesday economy, and, in contrast to their view that the Normans displayed considerable ability in civil administration and military matters, economic production was regarded as poorly organized (see McDonald and Snooks, 1985a, 1985b and 1986, especially Ch. 3). One explanation of why the Domesday scholars were unable to discover consistent relationships in the economy lies in the empirical method they adopted. Rather than examining the data as a whole using statistical techniques, conclusions were drawn by generalizing from a few (often atypical) cases. It is not surprising that no consistent pattern was evident when data were restricted to a few unusual observations. It would also appear that the researchers often did not have a firm grasp of economic theory (for example, seemingly being perplexed that the same annual value, that is, net output, could be generated by estates with different input mixes, see McDonald and Snooks, 1986, Ch. 3).

In McDonald and Snooks (1986), using modern economic and statistical methods, Graeme Snooks and I reanalyzed manorial production relationships. The study shows that strong relationships existed linking estate net output to inputs. We estimated manorial production functions, which indicate many interesting characteristics of Domesday production: returns to scale were close to constant, oxen plough teams and meadowland were prized inputs in production but horses contributed little, and villans, bordars and slaves (the less free workers) contributed far more than freemen and sokemen (the more free) to the estate’s net output. The evidence suggested that in many ways Domesday landholders operated in a manner similar to modern entrepreneurs. Unresolved by this research was the question of how similar was the pattern of medieval and modern economic activity. In particular, how well organized was estate production?

Clearly, in an absolute sense Domesday estate production was inefficient. With modern technology, using, for example, motorized tractors, output could have been increased many-fold. A more interesting question is: Given the contemporary technology and institutions, how efficient was production?

In McDonald (1998) frontier methods were used to measure best practice, given the economic environment. We then measured how far, on average, estate production fell below the best-practice frontier. Provided some estates were effectively organized, so that best practice was good practice, this is a useful measure. If many estates had been run haphazardly and ineffectively, average efficiency would be low and the dispersion of efficiency measures large. Comparisons with average efficiency levels in similar production situations give an indication of whether Domesday average efficiency was unusually low.

A large number of efficiency studies have been reported in the literature. Three case studies with characteristics similar to Domesday production are Hall’s (1975) study of agriculture after the Civil War in the American South, Hall and LeVeen’s (1978) analysis of small Californian farms and Byrnes, Färe, Grosskopf and Lovell’s (1988) study of American surface coalmines. For all three studies the individual establishment is the production unit, the economic activity is unsophisticated primary production and similar frontier methods are used to measure efficiency.

The comparison studies suggest that efficiency levels varied less across Domesday estates than they did among postbellum Southern farms and small Californian farms in the 1970s (and were very similar for Domesday estates and US surface coalmines). Certainly, the average Domesday estate efficiency level does not appear to be unusually low when compared with average efficiency levels in similar production situations.

In McDonald (1998) estate efficiency measures are also used to examine details of production on individual estates and statistical methods employed to find factors associated with efficiency. Some of these include the estate’s tenant-in-chief (some tenants-in-chief displayed more entrepreneurial flair than others), the size of the estate (larger estates, using inputs in different proportions to smaller estates, tended to be more efficient) and the kind of agriculture undertaken (estates specialized in grazing were more efficient).

Largely through the influences of feudalism and manorialism, Domesday agriculture suffered from poorly developed factor markets and considerable immobility of inputs. Although there were exceptions to the rule, as a first approximation manorial production can be characterized in terms of estates worked by a residential labor force using the resources available on the estate.

Input productivity depends on the mix of inputs used in production, and with estates endowed with widely different resource mixes, one might expect that input productivities would vary greatly across estates. The frontier analysis generates input productivity measures (shadow prices), and these confirm this expectation — indeed on many estates some inputs made very little contribution to production. The frontier analysis also allows us to estimate the economic cost of input rigidity induced by the feudal and manorial arrangements. The calculation indicates that if inputs had been mobile among estates an increase in total net output of 40.1 percent would have been possible. This potential loss in output is considerable. The frontier analysis indicates the loss in total net output resulting from estates not being fully efficient was 51.0 percent. The loss in output due to input rigidities is smaller, but of a similar order of magnitude.

Domesday Book is indeed a rich data source. It is remarkable that so much can be discovered about the English economy almost one thousand years ago.

Further reading

Background information on Domesday England is contained in McDonald and Snooks (1986, Ch. 1 and 2; 1985a, 1985b, 1987a and 1987b) and McDonald (1998). For more comprehensive accounts of the history of the period see Brown (1984), Clanchy (1983), Loyn (1962), (1965), (1983), Stenton (1943), and Stenton (1951). Other useful references include Ballard (1906), Darby (1952), (1977), Galbraith (1961), Hollister (1965), Lennard (1959), Maitland (1897), Miller and Hatcher (1978), Postan (1966), (1972), Round (1895), (1903), the articles in Williams (1987) and references cited in McDonald and Snooks (1986). The Survey is discussed in McDonald and Snooks (1986, sec. 2.2), the references cited there, and the articles in Williams (1987). The Domesday and modern surveys are compared in McDonald and Snooks (1985c).
The reconstruction of the Domesday economy is described in McDonald and Snooks (1986). Part 1 contains information on the basic tax and production relationships and Part 2 describes the methods used to estimate the relationships. The tax and production frontier analysis and efficiency comparisons are described in McDonald (1998). The book also explains the frontier methodology. A series of articles describe features of the research to different audiences: McDonald and Snooks (1985a, 1985b, 1987a, 1987b), economic historians; McDonald (2000), economists; McDonald (1997), management scientists; McDonald (2002), accounting historians (who recognize that Domesday Book possesses many attributes of an accounting record); and McDonald and Snooks (1985c), statisticians. Others who have made important contributions to our understanding of the Domesday economy include Miller and Hatcher (1978), Harvey (1983) and the contributors to the volumes edited by Aston (1987), Holt (1987), Hallam (1988) and Britnell and Campbell (1995).

References

Aston, T.H., editor. Landlords, Peasants and Politics in Medieval England. Cambridge: Cambridge University Press, 1987.
Ballard, Adolphus. The Domesday Inquest. London: Methuen, 1906.
Britnell, Richard H. and Bruce M.S. Campbell, editors. A Commercialising Economy: England 1086 to c. 1300. Manchester: Manchester University Press, 1995.
Brown, R. Allen. The Normans. Woodbridge: Boydell Press, 1984.
Byrnes, P., R. Färe, S. Grosskopf and C.A. K. Lovell. “The Effect of Unions on Productivity: U.S. Surface Mining of Coal.” Management Science 34 (1988): 1037-53.
Clanchy, M.T. England and Its Rulers, 1066-1272. Glasgow: Fontana, 1983.
Darby, H.C. The Domesday Geography of Eastern England. Reprinted 1971. Cambridge: Cambridge University Press, 1952.
Darby, H.C. Domesday England. Reprinted 1979. Cambridge: Cambridge University Press, 1977.
Darby, H.C. and I.S. Maxwell, editor. The Domesday Geography of Northern England. Cambridge: Cambridge University Press, 1962.
Galbraith, V.H. The Making of Domesday Book. Oxford: Clarendon Press, 1961.
Hall, A. R. “The Efficiency of Post-Bellum Southern Agriculture.” Ann Arbor, MI: University Microfilms International, 1975.
Hall, B. F. and E. P. LeVeen. “Farm Size and Economic Efficiency: The Case of California.” American Journal of Agricultural Economics 60 (1978): 589-600.
Hallam, H.E. Rural England, 1066-1348. Brighton: Fontana, 1981.
Hallam, H.E., editor. The Agrarian History of England and Wales, II: 1042-1350. Cambridge: Cambridge University Press, 1988.
Harvey, S.P.J. “The Extent and Profitability of Demesne Agriculture in the Latter Eleventh Century.” In Social Relations and Ideas: Essays in Honour of R.H. Hilton, edited by T.H. Ashton et al. Cambridge, Cambridge University Press, 1983.
Hollister, C.W. The Military Organisation of Norman England. Oxford: Clarendon Press, 1965.
Holt, J. C., editor. Domesday Studies. Woodbridge: Boydell Press, 1987.
Langdon, J. “The Economics of Horses and Oxen in Medieval England.” Agricultural History Review 30 (1982): 31-40.
Lennard, R. Rural England 1086-1135: A Study of Social and Agrarian Conditions. Oxford: Clarendon Press, 1959.
Loyn, R. Anglo-Saxon England and the Norman Conquest. Reprinted 1981. London: Longman, 1962.
Loyn, R. The Norman Conquest. Reprinted 1981. London: Longman, 1965.
Loyn, R. The Governance of Anglo-Saxon England, 500-1087. London: Edward Arnold, 1983.
McDonald, John. “Manorial Efficiency in Domesday England.” Journal of Productivity Analysis 8 (1997): 199-213.
McDonald, John. Production Efficiency in Domesday England. London: Routledge, 1998.
McDonald, John. “Domesday Economy: An Analysis of the English Economy Early in the Second Millennium.” National Institute Economic Review 172 (2000): 105-114.
McDonald, John. “Tax Fairness in Eleventh Century England.” Accounting Historians Journal 29 (2002): 173-193.
McDonald, John, and G. D. Snooks. “Were the Tax Assessments of Domesday England Artificial? The Case of Essex.” Economic History Review 38 (1985a): 353-373.
McDonald, John, and G. D. Snooks. “The Determinants of Manorial Income in Domesday England: Evidence from Essex.” Journal of Economic History 45 (1985b): 541-556.
McDonald, John, and G. D. Snooks. “Statistical Analysis of Domesday Book (1086).” Journal of the Royal Statistical Society, Series A 148 (1985c): 147-160.
McDonald, John, and G. D. Snooks. Domesday Economy: A New Approach to Anglo-Norman History. Oxford: Clarendon Press, 1986.
McDonald, John, and G. D. Snooks. “The Suitability of Domesday Book for Cliometric Analysis.” Economic History Review 40 (1987a): 252-261.
McDonald, John, and G. D. Snooks. “The Economics of Domesday England.” In Domesday Book Studies, edited by A. Williams. London: Alecto Historical Editions, 1987.
Maitland, Frederic William. Domesday Book and Beyond. Reprinted 1921, Cambridge: Cambridge University Press, 1897.
Miller, Edward, and John Hatcher. Medieval England: Rural Society and Economic Change 1086-1348. London: Longman, 1978.
Morris, J., general editor. Domesday Book: A Survey of the Counties of England. Chichester: Phillimore, 1975.
Postan, M. M. Medieval Agrarian Society in Its Prime, The Cambridge Economic History of Europe. Vol. 1, M. M. Postan, editor. Cambridge: Cambridge University Press, 1966.
Postan, M. M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. London: Weidenfeld & Nicolson, 1972.
Raftis, J. A. The Estates of Ramsey Abbey: A Study in Economic Growth and Organisation. Toronto: Pontifical Institute of Medieval Studies, 1957.
Round, John Horace. Feudal England: Historical Studies on the Eleventh and Twelfth Centuries. Reprinted 1964. London: Allen & Unwin, 1895.
Round, John Horace. “Essex Survey.” In VCH Essex. Vol. 1, reprinted 1977. London: Dawson, 1903.
Snooks, G. D. “The Dynamic Role of the Market in the Anglo-Saxon Economy and Beyond, 1086-1300.” In A Commercialising Economy: England 1086 to c. 1300, edited by R. H. Britnell and B. M. S. Campbell. Manchester: Manchester University Press, 1995.
Stenton, D. M. English Society in the Middle Ages. Reprinted 1983. Harmondsworth: Penguin, 1951.
Stenton, F. M. Anglo-Saxon England. Reprinted 1975. Oxford: Clarendon Press, 1943.
Victoria County History. London: Oxford University Press, 1900-.
Williams, A., editor. Domesday Book Studies. London: Alecto Historical Editions, 1987.

Citation: McDonald, John. “Economy of England at the Time of the Norman Conquest”. EH.Net Encyclopedia, edited by Robert Whaples. September 9, 2004. URL http://eh.net/encyclopedia/economy-of-england-at-the-time-of-the-norman-conquest/

The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, but it was then hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person did not surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
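The per-capita arithmetic behind these figures can be checked on the back of an envelope, using the rounded growth rates from the text (this is not a full annual series, only a sketch of the 1892-1894 trough):

```python
# Index 1892 output and population at 1.00 and apply the rounded rates above.
gnp_1894 = 1.00 * (1 - 0.04) * (1 - 0.06)   # real GNP falls 4%, then 6%
pop_1894 = 1.00 * 1.02 ** 2                 # population grows about 2% per year
per_capita_1894 = gnp_1894 / pop_1894

# Real GNP is roughly 10% below its 1892 level by 1894, and per-capita
# GNP roughly 13% below, which is why per-person output took until 1899
# to regain its 1892 peak even as total output recovered by 1895.
```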

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1986

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world: below that of Britain, but higher than that of the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800 and 60% in 1860, had fallen to roughly 40% in 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market, unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide, about 29 percent of farmers were encumbered by mortgages. One contemporary observer estimated 2.3 million farm mortgages nationwide in 1890, worth over $2.2 billion. But farmers in the plains were much more likely to be in debt. Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44 percent, in Montana to 41 percent, and in Colorado to 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles a year, on average, were built, including the Southern Pacific, Northern Pacific, and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War, the gold standard and monetary policy, underconsumption (the economy was producing goods and services at a higher rate than society was consuming, and the resulting inventory accumulation led firms to reduce employment and cut back production), a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly), and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce was improved. During the late nineteenth century, the United States had a negative net balance of payments. Passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors ordinarily more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks in the nation at large and outside New York, factory employment, wholesale prices, and railroad freight ton mileage advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn held steady, but at a low figure and on a fall of one-eighth in output. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash, because the declining expectation of a positive return reduced investors’ willingness to buy mortgage paper.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November 1890. Monetary stringency was a basic cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties and a sharp break in securities prices marked the advent of severe contraction. The great banking house of Baring Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many indices of activity fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year-end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe to increase exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions caused these flows to reverse: Americans sold off foreign holdings and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities; then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. The reduced inflow of such funds during the 1890s was one of the factors that, together with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds, sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, the decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell, as did exports and commerce in general. By the end of 1893, business failures numbering 15,242, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms that would have survived under ordinary circumstances failed. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making them more severe. Those whom the depression struck hardest, as well as much of the general public and the major Protestant churches, sharpened their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth-century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began slowly to swing toward the governmental activism and interventionism associated with modern industrial societies, erecting in the process the intellectual foundation for the reform impulse that was to be called Progressivism in twentieth-century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans for the next century debated the most vital questions of their shared experience. The depression was a reminder that business slumps recur, and it strengthened calls to place commonweal above avarice and principle above principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of the deep-rooted changes attending industrialization, urbanization, and the other dimensions of the ongoing transformation of the United States, and proposals of means for adapting to them, long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1. (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

Deflation

Pierre L. Siklos, Wilfrid Laurier University

What is Deflation?

Deflation is a persistent fall in some generally followed aggregate indicator of price movements, such as the consumer price index or the GDP deflator. Generally, a one-time fall in the price level does not constitute a deflation. Instead, one has to see continuously falling prices for well over a year before concluding that the economy suffers from deflation. How long the fall has to continue before the public and policy makers conclude that the phenomenon is reflected in expectations of future price developments is open to question. For example, in Japan, which has the distinction of experiencing the longest post World War II period of deflation, it took several years for deflationary expectations to emerge.

Most observers tend to focus on changes in consumer or producer prices since, as far as monetary policy is concerned, central banks are responsible for ensuring some form of price stability (usually defined as inflation rates of +3% or less in much of the industrial world). However, sustained decreases in asset prices, such as for stock market shares or housing, can also pose serious economic problems since, other things equal, such outcomes imply lower wealth and, in turn, reduced consumption spending. While the connection between goods price and asset price inflation or deflation remains a contentious one in the economics profession, policy makers are undoubtedly worried about the existence of a link, as Alan Greenspan’s “irrational exuberance” remark of 1996 illustrates.

Historical and Contemporary Worries about Deflation

Until 2002, prospects for a deflation outside Japan remained unlikely. Prior to that time, deflation had been a phenomenon primarily of the 1930s, inextricably linked with the Great Depression, especially in the United States. Most observers viewed Japan’s deflation as part of a general economic malaise stemming from a mix of bad policy choices, bad politics, and a banking industry insolvency problem that would simply not go away. However, by 2001, reports of falling US producer prices, a sluggish economy, and the spread of deflation beyond Japan to China, Taiwan, and Hong Kong, to name a few countries, eventually led policy makers at the US Federal Reserve Board to publicly express their determination to avoid deflation (e.g., see IMF 2003; Borio and Filardo 2004). Governor Bernanke of the US Federal Reserve raised the issue of deflation in late 2002 when he argued that the US public ought not to be overly worried since the Fed was on top of the issue and, in any event, the US was not Japan. Nevertheless, he also stressed that “central banks must today try to avoid major changes in the inflation rate in either direction. In central bank speak, we now face ‘symmetric’ inflation risks.”1 The risks Governor Bernanke was referring to stem from the fact that, now that low inflation rates have been achieved, the public has to maintain the belief that central banks will neither allow inflation to creep up nor permit the onset of deflation. Even the IMF began to worry about the likelihood of deflation, as reflected in a major report, released in mid-2003, that assessed the probability that deflation might become a global phenomenon. While the risk that deflation might catch on in the US was deemed fairly low, the threat of deflation in Germany, for example, was viewed as being much greater.

Deflation in the Great Depression Era

It is evident from the foregoing illustrations that deflation has again emerged as public policy enemy number one in some circles. Most observers need only think back to the global depression of the 1930s, when the combination of a massive fall in output and the price level devastated the U.S. economy. While the Great Depression was a global phenomenon, actual output losses varied considerably, from modest losses to the massive losses incurred by the U.S. economy. During the period 1928-1933, U.S. output fell by approximately 25%, as did prices. Other countries, such as Canada and Germany, also suffered large output losses. Canada also experienced a fall in output of at least 25% over the same period, while prices in 1933 were only about 78% of prices in 1928. In the case of Germany, the deflation rate over the same 1928-1933 period was similar to that experienced in Canada, while output fell just over 20% in that time. No wonder analysts associate deflation with “ugly” economic consequences. Nevertheless, as we shall see, there are varieties of deflationary experience. In any event, it needs to be underlined that the Great Depression period of the 1930s did not result in massive output losses worldwide. In particular, seminal analyses by Friedman and Schwartz (1982) and Meltzer (2003) concluded that the 1930s represented a deflationary episode driven by falling aggregate demand, compounded by poor policy choices by the leadership at the US Federal Reserve, which was wedded at the time to a faulty ideology (a version of the ‘real bills’ doctrine2). Indeed, the competence of the interwar Fed has been the subject of considerable ongoing debate throughout the decades. Disagreements over the role of credit in deflation and concerns about how to reinvigorate the economy were, of course, also expressed in public at the time. Strikingly, however, the relationship between deflation and central bank policy was often entirely missing from the discussion.

The Debt-Deflation Problem

The prevailing ideology treated the occasional deflation as a necessary spur to economic growth, a symptom of economic health rather than of economic malaise. However, there were notable exceptions to the chorus of views favorable to deflation. Irving Fisher developed what is now referred to as the “debt-deflation” hypothesis: falling prices increase the real burden of debt and adversely affect firms’ balance sheets. This was particularly true of the plight faced by farmers in many countries, including the United States, during the 1920s, when falling agricultural prices combined with tight monetary policies sharply raised the costs of servicing existing debts. The same was true of the prices of raw materials. The table below illustrates the rather precipitous drop in the price level of some key commodities in a single year.
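Fisher’s mechanism can be made concrete with a small worked example (the numbers here are hypothetical and not drawn from the article). A fixed nominal debt D owed when the price level is P has real value D/P, so a fall in P raises the real burden of the debt even though the nominal obligation is unchanged:

```latex
% Hypothetical illustration of the debt-deflation mechanism.
% Nominal debt D = $1,000; the price level falls 20 percent,
% from P_1 = 1.00 to P_2 = 0.80.
\[
\text{real burden before deflation: } \frac{D}{P_1} = \frac{1000}{1.00} = 1000,
\qquad
\text{after: } \frac{D}{P_2} = \frac{1000}{0.80} = 1250.
\]
% A 20 percent deflation thus raises the real debt burden by 25 percent
% (since 1/0.80 = 1.25), while the debtor's nominal income, earned by
% selling crops at the new, lower prices, has typically fallen as well.
```

The squeeze comes from both sides: the real value of what is owed rises while the nominal receipts available to service it fall, which is precisely the bind the farmers described above found themselves in.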

Table 1
Commodity Prices in the U.S., 1923-24

Commodity Group May 1924 May 1923
All commodities 147 156
Farm Products 136 139
Foods 137 144
Clothes and Clothing 187 201
Fuel and Lighting 177 190
Metals 134 152
Building Materials 180 202
Chemicals and drugs 127 134
House furnishings 173 187
Miscellaneous 112 125
Source: Federal Reserve Bulletin, July 1924, p. 532.
Note: Prices in 1913 are defined to equal 100

The Postponing Purchases Problem

Hence, a deflation can be a harbinger of financial crisis, with repercussions for the economy as a whole. Others, such as Keynes, also worried about the impact of deflation on aggregate demand, as individuals and firms postpone purchases, especially of durable goods, in the hope of buying at lower future prices. He actually advocated a policy not too dissimilar to what we would today call inflation targeting (e.g., see Burdekin and Siklos 2004, ch. 1). Unfortunately, the prevailing ideology held that deflation was a purgative of sorts, that is, the price to be paid for economic excesses during the boom years, and necessary to establish the conditions for economic recovery. The reason is that economic booms were believed to be associated with excessive inflation, which had to be rooted out of the system. Hence, prices that had risen too fast could only be cured by a return to lower levels.

Not All Deflations Are Bad

So, are all deflations bad? Not necessarily. The United Kingdom experienced several years of falling prices between 1870 and 1939. However, since that deflation was apparently largely anticipated (e.g., see Capie and Wood 2004), it did not produce adverse economic consequences. Finally, an economy that experiences a surge of financial and technological innovations would effectively see rising aggregate supply that, with only modest growth in aggregate demand, would translate into lower prices over time. Indeed, estimates based on simple relationships suggest that the sometimes calamitous effects thought to be associated with deflation can largely be explained by the rather unique event of the Great Depression of the 1930s (Burdekin and Siklos 2004, ch. 1). However, the other difficulty is that a deflation may at first appear to be supply driven until policy makers come to the realization that aggregate demand is the proximate cause. This seems to be the case in the modern-day episodes of deflation in Japan and China.

Differences between the 1930s and Today

What’s different about the prospects of a deflation today? First, and perhaps most obviously, we know considerably more than in the 1930s about the transmission mechanism of monetary policy decisions. Second, the prevailing economic ideology favors flexible exchange rates. Almost all of the countries that suffered from the Great Depression adhered to some form of fixed exchange rates, usually under the aegis of the Gold Standard. As a result, the transmission of deflation from one country to another was much stronger than it is under flexible exchange rates. Third, policy makers have many more policy instruments today than seventy years ago. Not only can monetary policy be more effective when correctly applied, but fiscal policy exists on a scale that was not possible during the 1930s. Nevertheless, fiscal policy, if misused, as has apparently been the case in Japan, can actually add to the difficulty of extricating an economy from a deflationary slump. There are similar worries about the U.S. case, as the anticipated surpluses have turned into large deficits for the foreseeable future. Likewise, the fiscal rules adopted by the European Union severely hinder, some would say altogether prevent, the scope for a stimulative fiscal policy. Fourth, policy-making institutions are both more open and accountable than in past decades. Central banks are autonomous and accountable, and their efforts to make monetary policy more transparent to financial markets, widely regarded as powerful devices for enhancing credibility, ought to reduce the likelihood of serious policy errors.

Parallels between the 1930s and Today

Nevertheless, in spite of the obvious differences between the situation today and that faced seven decades ago, some parallels remain. For example, until 2000, many policy makers, including the central bankers at the Fed, felt that the technological developments of the 1990s might lead to economic growth almost without end and, in this “new” era, the prospect of a bad deflation seemed the furthest thing from their minds. Similarly, the Bank of Japan was long convinced that its deflation was of the good variety; it took its policy makers a decade to recognize the seriousness of the situation. In Japan, the debate over the menu of reforms and policies needed to extricate the economy from its deflationary trap continues unabated. Worse still, the recent Japanese experience raises the specter of Keynes’s famous liquidity trap (Krugman 1998), namely a state of affairs in which lower interest rates are unable to stimulate investment or economic activity more generally. Hence, deflation, combined with expectations of falling prices, conspires to make the so-called ‘zero lower bound’ on nominal interest rates an increasingly binding one (see below).

Two More Concerns: Labor Market and Credit Market Impacts of Deflation

There are at least two other reasons to worry about the onset of a deflation with devastating economic consequences. First, labor markets exhibit considerably less flexibility than several decades ago, so it is considerably more difficult for nominal wages to fall in step with prices. If they do not, real wages actually rise in a deflation, producing even more slack in the labor market; the resulting increases in the unemployment rate further reduce aggregate demand, the exact opposite of what is needed. A second consideration is the ability of monetary policy to stimulate the economy when interest rates are close to zero. The so-called “zero lower bound” constraint on nominal interest rates means that if the rate of deflation rises, so do real interest rates, further depressing aggregate demand. Therefore, while history need not repeat itself, the mistakes of the past need to be kept firmly in mind.
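The arithmetic behind the zero-lower-bound concern can be made concrete with the Fisher relation (real rate is approximately the nominal rate minus expected inflation). The figures in this sketch are hypothetical, chosen only to illustrate the mechanism:

```python
def real_rate(nominal, expected_inflation):
    """Approximate ex-ante real interest rate via the Fisher relation.

    Rates are in percentage points; expected deflation enters as
    negative expected inflation.
    """
    return nominal - expected_inflation

# Normal times: a 4% nominal rate with 2% expected inflation
# implies a 2% real rate.
assert real_rate(4.0, 2.0) == 2.0

# At the zero lower bound with prices expected to fall 2% per year,
# the real rate is +2% even though the nominal rate is zero.
assert real_rate(0.0, -2.0) == 2.0

# Deeper expected deflation *raises* the real rate further,
# depressing aggregate demand precisely when stimulus is needed.
assert real_rate(0.0, -4.0) > real_rate(0.0, -2.0)
```

Because the central bank cannot push the nominal rate below zero, worsening deflation expectations mechanically tighten real borrowing conditions, which is the trap described above.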

Frequency of Deflation in the Historical Record

As noted above, inflation has been an all too common occurrence since 1945. The table below shows that deflation has become a much less common feature of the macroeconomic landscape. One has to go back to the 1930s before encountering successive years of deflation.3 Indeed, for the countries listed below, the number of times prices fell year over year for two years or more is relatively small. Hence, deflation is a fairly unusual event.
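The counting rule just described, an episode being two or more consecutive years of falling prices, can be sketched as a small routine. The price series here is hypothetical and purely illustrative:

```python
def deflation_episodes(prices):
    """Return (start_year, end_year) spans with two or more consecutive
    year-over-year price declines. `prices` maps year -> price level."""
    years = sorted(prices)
    # Years in which the price level fell relative to the prior year.
    falling = [y for prev, y in zip(years, years[1:]) if prices[y] < prices[prev]]
    episodes, run = [], []
    for y in falling:
        if run and y == run[-1] + 1:
            run.append(y)          # extend the current run of falling years
        else:
            if len(run) >= 2:      # close out a qualifying episode
                episodes.append((run[0], run[-1]))
            run = [y]
    if len(run) >= 2:
        episodes.append((run[0], run[-1]))
    return episodes

# Hypothetical CPI levels: prices fall 1930-1933, then recover.
cpi = {1929: 100, 1930: 97, 1931: 92, 1932: 88, 1933: 87, 1934: 89}
print(deflation_episodes(cpi))  # -> [(1930, 1933)]
```

A single year of falling prices does not count as an episode under this definition, which is why the tallies in the table remain small even for countries with long price records.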

Table 2: Episodes of Deflation from the mid-1800s to 1945

Country (year record begins) | Occurrences of deflation until 1945 | Years of persistent deflation / crises
Austria (1915) | 1 |
Australia (1862) | 5 | BC: 1893; CC: 1933-33
Belgium (1851) | 9 | 1892-96; BC, CC: 1924-26; 1931-35, BC: 1931; BC, CC: 1934-35
Canada (1914) | 2 | CC: 1891, 1893, 1908, 1921, 1929-31; 1930-33
Denmark (1851) | 9 | 1882-86, BC: 1885; 1892-96; BC: 1907-08; 1921-32, BC, CC: 1921-22, 1931-32
Finland (1915) | 1 | BC: 1900, 1921; 1929-34, BC, CC: 1931-32
France (1851) | 4 | CC, BC: 1888-89, 1907, 1923, 1926; 1932-35, BC: 1930-32
Germany (1851) | 8 | 1892-96, BC, CC: 1893, 1901, 1907; 1930-33, BC, CC: 1931, 1934
Ireland (1923) | 2 | 1930-33
Italy (1862) | 6 | 1881-84, BC, CC: 1891, 1893-94, 1907-08, 1921; 1930-34, BC: 1930-31, 1934-35
Japan (1923) | 1 | CC, BC: 1900-01, 1907-08, 1921; 1925-31, BC, CC: 1931-32
Netherlands (1881) | 6 | 1893-96, BC, CC: 1897, 1921; 1930-32; CC, BC: 1935, 1939
Norway (1902) | 2 | BC, CC: 1891, 1921-23; 1926-33, BC, CC: 1931
New Zealand (1908) | 1 | BC: 1920, 1924-25; 1929-33, BC, CC: 1931
Spain (1915) | 2 |
Sweden (1851) | 9 | 1882-87; 1930-33
Switzerland (1891) | 4 | 1930-34
UK (1851) | 8 | 1884-87; 1926-33
US (1851) | 9 | 1875-79; 1930-33

Notes: Data are from chapter 1 of Richard C.K. Burdekin and Pierre L. Siklos, editors, Deflation: Current and Historical Perspectives, New York: Cambridge University Press, 2004. The numbers in parentheses in the first column refer to the first year for which data are available. The second column gives the frequency of deflation episodes, defined as two or more consecutive years of falling prices. The last column provides some illustrations of especially persistent declines in the price level, defined in terms of consumer prices. Years with currency crises (CC) or banking crises (BC) are shown where data are available. The crisis dates are from Michael D. Bordo, Barry Eichengreen, Daniela Klingebiel, and Maria Soledad Martinez-Peria, “Financial Crises: Lessons from the Last 120 Years,” Economic Policy, April 2001.

Is There an Empirical Deflation-Recession Link?

If deflation is indeed so unusual, why has there been so much concern expressed over the possibility of its return? One reason is the mediocre economic performance associated with Japan’s deflation. Furthermore, the foregoing table makes clear that in a number of countries the deflation of the 1930s was associated with the Great Depression. Indeed, as the table also indicates for countries where we have data, the Great Depression represented a combination of several crises, simultaneously financial and economic in nature. However, it is also clear that deflation need not always be accompanied by a currency crisis or a banking crisis. Since the Great Depression was a singularly devastating event from an economic perspective, it is not entirely surprising that observers would associate deflation with depression.

But is this necessarily so? After all, the era roughly from 1870 to 1890 was also a period of deflation in several countries and, as the figure below suggests, in the United States and elsewhere, deflation was accompanied by strong economic growth. It is what some economists might refer to as a “good” deflation, since it occurred at a time of tremendous technological improvements (in transportation and communications especially). That is not to say that such developments went unopposed, even under these circumstances. Indeed, the deflation prompted some, most famously William Jennings Bryan in the United States, to run for office in the belief that the Gold Standard’s proclivity to create deflation was akin to crucifying “mankind upon a cross of gold.” In contrast, the Great Depression would be characterized as a “bad” or even “ugly” deflation, since it was associated with a great deal of slack in the economy.

Figure 1
Prices Changes versus the Output Gap, 1870s and 1930s

Notes: The top figure plots the rate of CPI inflation for the periods 1875-79 and 1929-33 for the United States. The bottom figure is an estimate of the output gap for the U.S., that is, the difference between potential and actual real GDP; a negative number signifies that actual real GDP is higher than potential real GDP, and vice-versa when the output gap is positive. See Burdekin and Siklos (2004) for the details. The vertical line marks the gap in the data, as observations for 1880-1929 are not plotted.

Conclusions

When policy makers today speak of the need to avoid deflation, their assessment is colored by the experience of the bad deflation of the 1930s and its international spread, and by the ongoing deflation in Japan. Hence, policy makers worry not only about deflation proper but also about its spread on a global scale.

If one lesson from history is that ideology can blind policy makers to necessary reforms, a second is that expectations of deflation, once entrenched, may be difficult to reverse. The occasional fall in aggregate prices is unlikely to significantly affect longer-term expectations of inflation. This is especially true if the monetary authority is independent of political control and if the central bank is required to meet some kind of inflation objective. Indeed, many analysts have repeatedly suggested that Japan adopt an inflation target. The Japanese have responded that inflation targeting alone is incapable of helping the economy escape from deflation, but the Bank of Japan’s stubborn refusal to adopt such a strategy signals an unwillingness to commit to a different course for monetary policy, making it even more unlikely that expectations will be influenced by other policies ostensibly meant to reverse the path of Japanese prices. The Federal Reserve, of course, does not have a formal inflation target but has repeatedly stated that its policies are meant to control inflation within a 0-3% band. Whether formal and informal inflation targets represent substantially different monetary policy strategies continues to be debated, though the growing popularity of this type of strategy suggests that it greatly assists in anchoring expectations of inflation.

References

Borio, Claudio, and Andrew Filardo. “Back to the Future? Assessing the Deflation Record.” Bank for International Settlements, March 2004.

Burdekin, Richard C.K., and Pierre L. Siklos. “Fears of Deflation and Policy Responses Then and Now.” In Deflation: Current and Historical Perspectives, edited by Richard C.K. Burdekin and Pierre L. Siklos. New York: Cambridge University Press, 2004.

Capie, Forrest, and Geoffrey Wood. “Price Change, Financial Stability, and the British Economy, 1870-1939.” In Deflation: Current and Historical Perspectives, edited by Richard C.K. Burdekin and Pierre L. Siklos. New York: Cambridge University Press, 2004.

Friedman, Milton, and Anna J. Schwartz. Monetary Trends in the United States and the United Kingdom. Chicago: University of Chicago Press, 1982.

Humphrey, Thomas M. “The Real Bills Doctrine.” Federal Reserve Bank of Richmond Economic Review 68, no. 5 (1982).

International Monetary Fund. “Deflation: Determinants, Risks, and Policy Options: Findings of an Independent Task Force.” April 30, 2003.

Krugman, Paul. “It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap.” Brookings Papers on Economic Activity 2 (1998): 137-205.

Meltzer, Allan H. A History of the Federal Reserve. Chicago: University of Chicago Press, 2003.

Citation: Siklos, Pierre. “Deflation”. EH.Net Encyclopedia, edited by Robert Whaples. May 11, 2004. URL http://eh.net/encyclopedia/deflation/

Mechanical Cotton Picker

Donald Holley, University of Arkansas at Monticello

Until World War II, the Cotton South remained poor, backward, and unmechanized. With minor exceptions, most tasks — plowing, cultivating, and finally harvesting cotton — were done by hand. Sharecropping stifled the region’s attempts to mechanize, and too many farmers, both tenants and owners, were trying to survive on small, uneconomical farms, trapping themselves in poverty. From 1910 to 1970 the Great Migration, which included whites as well as blacks, reduced the region’s oversupply of small farmers and represented a tremendous success story for both the migrants and the region itself. The mechanical cotton picker played an indispensable role in the transition from the prewar South of overpopulation, sharecropping, and hand labor to the capital-intensive agriculture of the postwar South.

Inventions and Inventors

In 1850 Samuel S. Rembert and Jedediah Prescott of Memphis, Tennessee, received the first patent for a cotton harvester from the U.S. Patent Office, but it was almost a century later that a mechanical picker was commercially produced. The late nineteenth century was an age of inventions, and many inventors sought to perfect a mechanical cotton harvester. Their lack of success reinforced the belief that cotton would always be picked by hand. For almost a hundred years, it seemed, a successful cotton picker had been just around the corner.

Inventors experimented with a variety of devices that were designed to pick cotton.

  • Pneumatic harvesters removed cotton fiber from the bolls with suction or a blast of air.
  • Electrical cotton harvesters used a statically charged belt or finger to attract the lint and remove it from the boll.
  • The thresher type cut down the plant near the surface of the ground and took the entire plant into the machine, where the cotton fiber was separated from the vegetable material.
  • The stripper type harvester combed the plant with teeth or drew it between stationary slots or teeth.
  • The picker or spindle type machine was designed to pick the open cotton from the bolls using spindles, fingers, or prongs, without injuring the plant’s foliage and unopened bolls.

The picker or spindle idea drew the most attention. In the 1880s Angus Campbell, an agricultural engineer from Chicago, Illinois, observed the tedious process of picking cotton. For twenty years he made annual trips to Texas to test the latest model of his spindle picker, but his efforts met with ridicule; the consensus was that cotton would always be picked by hand. Campbell joined with Theodore H. Price to form the Price-Campbell Cotton Picker Corporation in 1912. The Price-Campbell machine performed poorly, but its backers believed they were on the right track.

Hiram M. Berry of Greenville, Mississippi, designed a picker with barbed spindles, though it was never perfected. Peter Paul Haring of Goliad, Texas, worked for thirty years to build a mechanical cotton picker using curved prongs or corkscrews.

John Rust

John Rust, the man who was ultimately credited with the invention of the mechanical cotton picker, personified the popular image of the lone inventor working in his garage. As a boy, he had picked cotton himself, and he dreamed that he could invent a machine that would relieve people of one of the most onerous forms of stoop labor.

John Daniel Rust was born in Texas in 1892. He was usually associated with his younger brother Mack Donald Rust, who had a degree in mechanical engineering. Mack did the mechanical work, while John was the dreamer who worried about the social consequences of their invention.

John was intrigued with the challenge of constructing a mechanical cotton picker. Other inventers had used spindles with barbs, which twisted the fibers around the spindle and pulled the lint from the boll. But the problem was how to remove the lint from the barbs. The spindle soon became clogged with lint, leaves, and other debris. He finally hit on the answer: use a smooth, moist spindle. As he later recalled:

The thought came to me one night after I had gone to bed. I remembered how cotton used to stick to my fingers when I was a boy picking in the early morning dew. I jumped out of bed, found some absorbent cotton and a nail for testing. I licked the nail and twirled it in the cotton and found that it would work.

By the mid-1930s the widespread use of mechanical cotton harvesters seemed imminent and inevitable. When in 1935 the Rust brothers moved to Memphis, the self-styled headquarters of the Cotton South, John Rust announced flatly, “The sharecropper system of the Old South will have to be abandoned.” The Rust picker could do the work of between 50 and 100 hand pickers, reducing labor needs by 75 percent. Rust expected to put the machine on the market within a year. A widely read article in the American Mercury entitled “The Revolution in Cotton” predicted the end of the entire plantation system. Most people compared the Rust picker with Eli Whitney’s cotton gin.

Rust’s 1936 Public Demonstration

In 1936, the Rust machine received a public trial at the Delta Experiment Station near Leland, Mississippi. Though the Rust picker was not perfected, it did pick cotton and it picked it well. The machine produced a sensation, sending a shudder throughout the region. The Rust brothers’ machine provoked the fear that a mechanical picker would destroy the South’s sharecropping system and, during the Great Depression, throw millions of people out of work. Such a human tragedy would release a flood of rural migrants, mostly black, on northern cities. The Jackson (Miss.) Daily News editorialized that the Rust machine “should be driven right out of the cotton fields and sunk into the Mississippi River.”

Soon a less strident and more balanced view emerged. William E. Ayres, head of the Delta Experiment Station, encouraged Rust:

We sincerely hope you can arrange to build and market your machine shortly. Lincoln emancipated the Southern Negro. It remains for cotton harvesting machinery to emancipate the Southern cotton planter. The sooner this [is] done, the better for the entire South.

Professional agricultural men saw the mechanization of cotton as a gradual process, since the cheap price of farm labor in the depression had slowed its progress. Still, the prospects for the future were grim. One agricultural economist predicted that mechanical cotton picking would become a reality over the next ten or fifteen years.

Cotton Harvester Sweepstakes

International Harvester

Major farm implement companies, which had far more resources than did the Rust brothers, entered what may be called the cotton harvester sweepstakes. Usually avoiding publicity, implement companies were happy to let the Rust brothers bear the brunt of popular criticism. International Harvester (IH) of Chicago, Illinois, had invented the popular Farmall tractor in 1924 and then experimented with pneumatic pickers. After three years of work, Harvester realized that a skilled hand picker could easily pick faster than their pneumatic machine.

IH then bought up the Price-Campbell patents and turned to spindle pickers. By the late 1930s Harvester was sending a caravan southward every fall to test its latest prototype, picking early cotton in Texas and late-maturing cotton in Arkansas and Mississippi. In 1940 chief engineer C. R. Hagen abandoned the idea of a tractor that pulled the picking unit behind it. Instead, the tractor was driven backward, enabling the picking unit to encounter the cotton plants first; the transmission was reversed so that the machine still used forward gears.

After the 1942 caravan, Fowler McCormick, chairman of the board of International Harvester, formally announced that his company had a commercial cotton picker ready for production. The IH picker was a one-row, spindle-type picker, but unlike the Rust machine it used a barbed spindle, which improved its ability to snag cotton fibers. This machine employed a doffer to clean the spindles before the next rotation. Unfortunately, the War Production Board allocated IH only enough steel to continue production of experimental models; IH was unable to start full-scale production until after World War II was over.

In late 1944, as World War II entered its final months, attention turned to a dramatic announcement. The Hopson Planting Company near Clarksdale, Mississippi, produced the first cotton crop totally without the use of hand labor. Machines planted the cotton, chopped it, and harvested the crop. It was a stunning achievement that foretold the future.

IH’s Memphis Factory, 1949

After the war, International Harvester constructed Memphis Works, a huge cotton picker factory located on the north side of the city, and manufactured the first pickers in 1949. Though the company had assembled experimental models for testing purposes, this event marked the first commercial production of mechanical cotton pickers. The plant’s location clearly showed that the company aimed its pickers for use in the cotton areas of the Mississippi River Valley.

Deere

Deere and Company of Moline, Illinois, had experimented with stripper-type harvesters and variations of the spindle idea, but discontinued these experiments in 1931. In 1944 the company resumed work after buying the Berry patents, though Deere’s machine incorporated its own innovative designs. Deere quickly regained the ground it had lost during the depression. In 1950, Deere’s Des Moines Works at Ankeny, Iowa, began production of a two-row picker that could do almost twice the harvesting job of one-row machines.

Allis Chalmers

Despite his success, John Rust realized that his picker was substandard, and during World War II he went back to his drafting board and redesigned his entire machine. His lack of financial resources ceased to be an obstacle when Allis Chalmers of Indianapolis, Indiana, offered to produce machines using his patents. He signed a non-exclusive agreement.

Pearson

In late 1948 cotton farmers near Pine Bluff, Arkansas, suffered from a labor shortage. Since cotton still stood unpicked in the fields at the end of the year, they invited Rust to demonstrate his picker. The demonstration was a success. Rust entered into an agreement with Ben Pearson, a Pine Bluff company known for archery equipment, to produce 100 machines for $1,000 each, paid in advance. All the machines were sold, and Ben Pearson hired Rust as a consultant and manufactured Rust cotton pickers.

Ancillary Developments

The mechanization of cotton did indeed proceed slowly. The production of cotton involved three distinct “labor peaks”: land breaking, planting, and cultivating; thinning and weeding; and harvesting. Until the 1960s cotton growers did not have a full set of technological tools to mechanize all three labor peaks.

Weed Control

Weed control was the last labor peak to be conquered. Desperate to solve the problem, farmers cross-cultivated their cotton, plowing across rows as well as up and down them. Flame weeders, taking advantage of the toughness of cotton stalks, used a flammable gas to kill weeds. The most peculiar sight in northeast Arkansas was flocks of weed-hungry geese that sauntered through cotton fields. Ultimately, the weed problem was solved not by machines but by chemicals: in 1964, the preemergence herbicide Treflan became a household word because of a television commercial. The need to chop and thin cotton, by contrast, was ultimately a problem of plant genetics.

Western cotton growers embraced mechanization earlier than did southern farmers. As early as 1951, more than half of California’s cotton crop was mechanically harvested, with hand picking virtually eliminated by the 1960s. Environmental conditions in the West produced smaller cotton plants, not the “rank” cotton of the Delta, and small plants favored machine picking. Western farmers also did not have to overcome the burden of an antiquated labor system. (See Figure 1.)

Figure 1. Machine Harvested Cotton as a Percentage of the Total Cotton Crop, Arkansas, California, South Carolina, and U.S. Average, 1949-1972

Source: United States Department of Agriculture, Economic Research Service. Statistics on Cotton and Related Data, 1920-1973, Statistical Bulletin No. 535 (Washington: Government Printing Office, 1974), 218.

Mechanization and Migration

The most controversial issue raised by the introduction of the mechanical cotton harvester has been its role in the Great Migration. Popular opinion has accepted the view that machines eliminated jobs and forced poor families to leave their homes and farms in a forlorn search for urban jobs. On the other hand, agricultural experts argued that mechanization was not the cause, but the result, of economic change in the Cotton South. Wartime and postwar labor shortages were the major factors stimulating the use of machines in cotton fields. Most of the out-migration from the South stemmed from a desire to obtain high-paying jobs in northern industries, not from an “enclosure” movement driven by landowners who mechanized as rapidly as possible. Indeed, the South’s cotton farmers were often reluctant to make the transition from hand labor, which was familiar and workable, to machines, which were expensive and untried.

Holley (2000) used an empirical analysis to compare the effects of mechanization and of manufacturing wages on the labor available for picking cotton. The results showed that mechanization accounted for less than 40 percent of the decrease in hand picking, while the other 60 percent was attributable to the decrease in the supply of labor caused by higher wages in manufacturing. Hand labor was pulled out of the Cotton South by higher industrial wages rather than displaced by job-destroying machines.
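As a purely illustrative accounting sketch, not Holley's actual estimation method, the split described above can be expressed as a simple additive decomposition; the function name and numbers are hypothetical:

```python
def decompose(total_decline, mechanization_share):
    """Split a decline in hand-picked cotton into the portion directly
    replaced by machines and the residual attributed to labor leaving
    for higher manufacturing wages (an additive accounting assumption,
    not a causal estimate)."""
    mechanization = total_decline * mechanization_share
    labor_pull = total_decline - mechanization
    return mechanization, labor_pull

# If hand picking fell by 100 units and machines directly replaced
# 40 percent of that decline, the remaining 60 units are attributed
# to the shrinking labor supply.
mech, pull = decompose(100.0, 0.40)
```

The point of the decomposition is simply that the two channels must exhaust the total decline, so attributing less than 40 percent to machines necessarily leaves the majority to the wage-pull channel.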

Timing of Migration

The evidence is overwhelming that migration greatly accelerated mechanization, not the reverse. The first commercial mechanical cotton pickers were manufactured in 1949, and these machines did not exist in large numbers until the early 1950s. Since the Great Migration began during World War I, mechanical pickers cannot have played any causal role in the first four decades of the migration. By 1950, soon after the first mechanical cotton pickers were commercially available, over six million migrants had already left the South. (See Table 1.) A decade later, most of the nation’s cotton was still hand picked. Only by the late 1960s, when the migration was losing momentum, did machines harvest virtually the entire cotton crop.

Table 1
Net Migration from the South, by Race, 1870-1970 (thousands)

Decade Native White Black Total
1870-1880 91 -68 23
1880-1890 -271 88 -183
1890-1900 -30 -185 -215
1900-1910 -69 -194 -218
1910-1920 -663 -555 -1,218
1920-1930 -704 -903 -1,607
1930-1940 -558 -480 -1,038
1940-1950 -866 -1,581 -2,447
1950-1960* -1,003* -1,575* -2,578
1960-1970* -508* -1,430* -1,938
Totals for 1940-1970 -2,377 -4,586 -6,963

Source: Hope T. Eldridge and Dorothy S. Thomas, Population Redistribution and Economic Growth, vol. 3 (Philadelphia: American Philosophical Society, 1964), 90. *United States Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington: Government Printing Office, 1975), Series C 55-62, pp. 93-95.

Migration figures also provide a comparison of statewide migration estimates in Arkansas, Louisiana, and Mississippi with estimates for counties that actually used mechanical pickers (79 of 221 counties or parishes). During the 1950s these counties accounted for less than half of the total white migration from the three-state region and just over half of the black migration. The same was true in the 1960s except that the white population showed a net gain, not a loss. (See Table 2.) Though push factors played some role in the migration, pull factors were more important. People deserted the cotton areas because they hoped to obtain better jobs and more money elsewhere.

Table 2
Estimated Statewide Migration, Arkansas, Louisiana, and Mississippi
Compared to Migration Estimates for Cotton Counties, 1950-1970

 

(Columns, first for 1950-1960 and then for 1960-1970: State as a Whole | Counties Using Mechanical Pickers | Percentage)

White
Arkansas: -283,000 | -106,388 | 37.6 || 38,000 | -26,026 | 68.5
Louisiana: 43,000 | -15,769 | 36.7 || 26,000 | -28,949* | 111.3
Mississippi: -110,000 | -50,997 | 46.4 || 10,000 | -771 | 7.7
Totals: -350,000 | -173,154 | 49.6 || 74,000 | -55,746 | 75.3
Black
Arkansas: -150,000 | -74,297 | 49.5 || -112,000 | -64,445 | 57.5
Louisiana: -93,000 | -42,151 | 45.3 || -163,000 | -62,290 | 38.2
Mississippi: -323,000 | -175,577 | 54.4 || -279,000 | -152,357 | 54.6
Totals: -566,000 | -292,025 | 51.6 || -554,000 | -279,092 | 50.4

Source: Donald Holley. The Second Great Emancipation: The Mechanical Cotton Picker, Black Migration, and How They Shaped the Modern South (Fayetteville: University of Arkansas Press, 2000), 178.

*The selected counties lost population, but Louisiana statewide recorded a population gain for the decade.

Most of the Arkansas migrants, for example, were young people from farm families who saw little future in agriculture. They were people with skills and thus possessed high employment potential. They also had better than average educations. In other words, they were not a collection of pathetic sharecroppers who had been driven off the land.

Conclusion

During and after World War II, the Cotton South was caught up in a complex interplay of economic forces. The region suffered shortages of agricultural labor during the war, which led to the collapse of the old plantation system. The number of tenant farmers and sharecroppers declined precipitously, and the U.S. Department of Agriculture stopped counting them after its 1959 census. The structure of southern agriculture changed as the number of farms declined steadily, while the size of farms increased. The age of Agri-Business had arrived.

The migration solved the long-standing problem of rural overpopulation, and did so without producing social upheaval. The migrants found jobs and improved their living standards, and simultaneously rural areas were relieved of their overpopulation. The migration also enabled black people to gain political clout in northern and western cities, and since Jim Crow was in part a system of labor control, the declining need for black labor in the South loosened the ties of segregation.

After World War II southern farmers faced a world that had changed. While the Civil War had freed the slaves, the mechanical cotton picker emancipated workers from backbreaking labor and emancipated the region itself from its dependence on cotton and sharecropping. Indeed, mechanization made possible the continuation of cotton farming in the post-plantation era. Yet cotton acreages declined as farmers moved into rice and soybeans, crops that were already mechanized, creating a more diversified agricultural economy. The end of sharecropping also signaled the end of the need for cheap, docile labor — always a prerequisite of plantation agriculture. The labor control that the South had always exercised over poor whites and blacks proved unattainable after the war. Thus the mechanization of cotton was an essential condition for the civil rights movement in the 1950s, which freed the region from Jim Crow. The relocation of political power from farms to cities was a related by-product of agricultural mechanization. In the second half of the twentieth century, the South underwent a second great emancipation as revolutionary changes swept the region that earlier were unattainable and even unimaginable.

Selected Bibliography

Carlson, Oliver. “Revolution in Cotton.” American Mercury 34 (February 1935): 129-36. Reprinted in Readers’ Digest 26 (March 1935): 13-16.

Cobb, James C. The Most Southern Place on Earth: The Mississippi Delta and the Roots of Regional Identity. New York: Oxford University Press, 1992.

Day, Richard H. “The Economics of Technological Change and the Demise of the Sharecropper.” American Economic Review 57 (June 1967): 427-49.

Drucker, Peter. “Exit King Cotton.” Harper’s 192 (May 1946): 473-80.

Fite, Gilbert C. Cotton Fields No More: Southern Agriculture, 1865-1980. Lexington: University Press of Kentucky, 1984.

Hagen, C. R. “Twenty-Five Years of Cotton Picker Development.” Agricultural Engineering 32 (November 1951): 593-96, 599.

Hamilton, C. Horace. “The Social Effects of Recent Trends in the Mechanization of Agriculture.” Rural Sociology 4 (March 1939): 3-19.

Heinicke, Craig. “African-American Migration and Mechanized Cotton Harvesting, 1950-1960.” Explorations in Economic History 31 (October 1994): 501-20.

Holley, Donald. The Second Great Emancipation: The Mechanical Cotton Picker, Black Migration, and How They Shaped the Modern South. Fayetteville: University of Arkansas Press, 2000.

Johnston, Oscar. “Will the Machine Ruin the South?” Saturday Evening Post 219 (May 31, 1947): 36-37, 94-95, 388.

Maier, Frank H. “An Economic Analysis of Adoption of the Mechanical Cotton Picker.” Ph.D. dissertation, University of Chicago, 1969.

Peterson, Willis, and Yoav Kislev. “The Cotton Harvester in Retrospect: Labor Displacement or Replacement.” Journal of Economic History 46 (March 1986): 199-216.

Rasmussen, Wayne D. “The Mechanization of Agriculture.” Scientific American 247 (September 1982): 77-89.

Rust, John. “The Origin and Development of the Cotton Picker.” West Tennessee Historical Society Papers 7 (1953): 38-56.

Street, James H. The New Revolution in the Cotton Economy: Mechanization and Its Consequences. Chapel Hill: University of North Carolina Press, 1957.

Whatley, Warren C. “New Estimates of the Cost of Harvesting Cotton: 1949-1964.” Research in Economic History 13 (1991): 199-225.

Whatley, Warren C. “A History of Mechanization in the Cotton South: The Institutional Hypothesis.” Quarterly Journal of Economics 100 (November 1985): 1191-1215.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Holley, Donald. “Mechanical Cotton Picker”. EH.Net Encyclopedia, edited by Robert Whaples. June 16, 2003. URL http://eh.net/encyclopedia/mechanical-cotton-picker/

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it had become a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal power, there was little need for mineral fuel in seventeenth- and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade, and the James River and Kanawha Canal failed to make the improvements necessary to accommodate coal barge traffic and to streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered the urban markets of the American seaboard. Anthracite coal has a higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular requirements of burning anthracite. The opening of several transportation links between Pennsylvania’s anthracite fields and seaboard cities, via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829), ensured that the flow of anthracite from mine to market would be cheap and fast. By the 1830s “stone coal” was less a geological curiosity and more a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:
Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;
Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coal Mining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Because most antebellum coal mining operations were limited to a few skilled miners aided by less-skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, which meant that work was not as dangerous as it would become in the era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power, even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners and struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adaptation of anthracite coal to iron making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no fewer than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.
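The price figures quoted above imply a steady long-run decline. A quick back-of-the-envelope calculation, using only the dollar figures in the text (the compound-rate computation itself is mine, not the source's), sketches the average annual pace of that decline:

```python
# Anthracite prices per ton quoted in the text: about $11 in 1830,
# $7 in 1840, and $5.50 in New York City by 1860.
prices = {1830: 11.00, 1840: 7.00, 1860: 5.50}  # dollars per ton

# Compound average annual rate of change over the full thirty years.
r = (prices[1860] / prices[1830]) ** (1 / (1860 - 1830)) - 1
print(f"average annual price change, 1830-1860: {r:.2%}")
```

A halving of the nominal price over thirty years works out to a decline of roughly 2.3 percent per year, sustained even as annual output grew nearly tenfold.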

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades, but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. From 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and by 1864 the real price stood forty-five percent above its 1860 level. In response, production increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and required only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. As miners sought to remove more coal, however, shafts were dug deeper below the water line. As a result, coal mining required ever larger amounts of capital, as new systems of pumping, ventilation, and extraction demanded the use of steam power in mines. By the 1890s, electric cutting machines had replaced blasting as the method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined with such machines. As the century progressed, miners raised more and more coal with the new technology, but along with these productivity gains came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. A national mining union appeared in 1890, when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA counted about a quarter of a million members, held a treasury worth over one million dollars, and played a major role in the industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coalfields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company symbolized a new coal industry in which hard-line positions developed in both labor’s and capital’s camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

Table 1: Coal Production in the United States, 1829-1899

Year      Anthracite      Bituminous      Percent Increase      Tons per
          (000s of tons)  (000s of tons)  over Decade           Capita
1829           138             102                                0.02
1839         1,008             552              550               0.09
1849         3,995           2,453              313               0.28
1859         9,620           6,013              142               0.50
1869        17,083          15,821              110               0.85
1879        30,208          37,898              107               1.36
1889        45,547          95,683              107               2.24
1899        60,418         193,323               80               3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
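The “Percent Increase over Decade” column can be recomputed directly from the tonnage figures in Table 1. A short sketch (the figures are transcribed from the table above; the check itself is not part of the source):

```python
# Table 1 production figures: (anthracite, bituminous), thousands of tons.
production = {
    1829: (138, 102),
    1839: (1008, 552),
    1849: (3995, 2453),
    1859: (9620, 6013),
    1869: (17083, 15821),
    1879: (30208, 37898),
    1889: (45547, 95683),
    1899: (60418, 193323),
}

years = sorted(production)
totals = {y: sum(production[y]) for y in years}

# Percent growth of total output from each decade benchmark to the next.
increases = {
    curr: round(100 * (totals[curr] - totals[prev]) / totals[prev])
    for prev, curr in zip(years, years[1:])
}
print(increases)
```

The recomputed values (550 for 1839, 313 for 1849, down to 80 for 1899) match the census column, confirming that the percentages refer to combined anthracite and bituminous output.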

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America. Baltimore: Johns Hopkins University Press, 2004.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M., editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves: Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis: Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair: A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The

Citation: Adams, Sean. “US Coal Industry in the Nineteenth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 23, 2003. URL http://eh.net/encyclopedia/the-us-coal-industry-in-the-nineteenth-century/

Cliometrics

John Lyons, Miami University

Lou Cain, Loyola University Chicago and Northwestern University

Sam Williamson, Miami University

Introduction

In the 1950s a small group of North American scholars adopted a revolutionary approach to investigating the economic past that soon spread to Great Britain and Ireland, the European mainland, Australia, New Zealand, and Japan. What was first called “The New Economic History,” then “Cliometrics,” was impelled by the promise of significant achievement, by the novelties of the recent (mathematical) formalization of economic theory, by the rapid spread of econometric methods, and by the introduction of computers into academia. Cliometrics has three obvious elements: the use of quantifiable evidence, the use of theoretical concepts and models, and the use of statistical methods of estimation and inference. It also has an important fourth element: the employment of the historian’s skills in judging the provenance and quality of sources, in placing an investigation in institutional and social context, and in choosing subject matter of significance to history as well as economics. Although the term cliometrics is used to describe work in a variety of historical social and behavioral sciences, the discussion here focuses on economic history.

A quantitative-analytical approach to economic history developed in the interwar years through the work of such scholars as Simon Kuznets in the U.S. and Colin Clark in Britain. Characteristic elements of cliometrics were stimulated by events, by changes in economics, and by an intensification of what might be called the statistical impulse.

First, depression, war, the dissolution of empires, a renewal of widespread and more rapid growth in the Western world, and the challenge of Soviet-style economic planning combined to focus attention on the sources and mechanisms of economic growth and development.

Second, new intellectual currents in economics, spurred in part by contemporary economic problems, arose and came to dominate the profession. In the 1930s, and especially during the war, theoretical approaches to the aggregate economy and its capabilities grew out of the new Keynesian macroeconomics and the development of national income accounting. Explicit techniques for analyzing resource allocation in detail were introduced and employed in wartime planning. Econometrics, the statistical analysis of economic data, continued to grow apace.

Third, the gathering of facts – with an emphasis on systematic arrays of quantitative facts – became more important. By the nineteenth century governments, citizens and scholars had become preoccupied with fact-gathering, but their collations were ordinarily ad hoc and unsystematic. Thoroughness and system became the desideratum of scholarly fact-gathering in the twentieth century.

All these forces had an impact on the birth of a more rigorous way of examining our economic past.

The New Economic History in North America

Cliometrics was unveiled formally in Williamstown, Massachusetts, in the autumn of 1957 at an unusual four-day gathering sponsored by the Economic History Association and the Conference on Research in Income and Wealth. Most of the program was designed to showcase recent work by economists who had ventured into history.

Young scholars in the Income and Wealth group presented their contributions to the historical national accounts of the United States and Canada, spearheaded by Robert Gallman’s estimates of U.S. commodity output, 1839-1899. A pair of headline sessions dealt with method; the one on economic theory and economic history was headed by Walt Rostow, who recalled his undergraduate years in the 1930s at Yale, where he had been led to ask himself “why not see what happened if the machinery of economic theory was brought to bear on modern economic history?” He asserted “economic history is a less interesting field than it could be, because we do not remain sufficiently loyal to the problem approach, which in fact underlies and directs our efforts.”

Newcomers John R. Meyer and Alfred H. Conrad presented two papers. The first was “Economic Theory, Statistical Inference, and Economic History” (1957), a manifesto for using formal theory and econometric methods to examine historical questions. They argued that particular historical circumstances are instances of more general phenomena, suitable for theoretical analysis, and that quantitative historical evidence, although relatively scarce, is much more abundant than many historians believed and can be analyzed using formal statistical methods. At another session Conrad and Meyer presented “The Economics of Slavery in the Antebellum South,” which incorporated their methodological views to refute a long-standing proposition that the slave system in the southern United States had become moribund by the 1850s and would have died out had there been no Civil War. Conrad and Meyer buttressed the point by showing that slaveholding, viewed as a business activity, had been at least as remunerative as other uses of financial and physical capital. More broadly they illustrated “the ways in which economic theory might be used in ordering and organizing historical facts.”
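The asset-pricing logic underlying the Conrad-Meyer calculation can be sketched briefly: treat an investment as a capital asset whose purchase price should equal the discounted stream of expected annual net earnings, and solve for the rate of return that equates the two. The sketch below is a minimal illustration of that generic logic only; all figures are hypothetical and are not Conrad and Meyer's actual data or estimates.

```python
def present_value(net_earnings, rate):
    """Discount a stream of annual net earnings back to the present."""
    return sum(e / (1 + rate) ** t for t, e in enumerate(net_earnings, start=1))

def internal_rate_of_return(price, net_earnings, lo=0.0, hi=1.0, tol=1e-6):
    """Find, by bisection, the discount rate that equates price with
    the present value of the earnings stream (PV is decreasing in rate)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if present_value(net_earnings, mid) > price:
            lo = mid  # PV still above price: implied return exceeds mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: purchase price of 1000 and thirty years of
# annual net earnings of 80. The implied return of roughly 7 percent
# could then be compared with yields on alternative assets.
irr = internal_rate_of_return(1000, [80] * 30)
```

Comparing such an implied return with the yields on other contemporary investments is, in essence, how Conrad and Meyer judged slaveholding "at least as remunerative as other uses of financial and physical capital."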

Two decades later Robert Gallman recalled that the Williamstown “conference did more than put the ball in motion … It also set the tone and style of the new economic history and even forecast the chief methodological and substantive interests that were to occupy cliometricians for the next twenty-one years.” What began in the late 1950s as a trickle of work in the new style grew to a freshet and then a flood, incorporating new methods, examining bodies of data previously too difficult to analyze without the aid of computers, and investigating a variety of questions of traditional importance, mostly in American economic history. The watershed was continent-wide, collecting the work of small clusters of scholars bound together in a ramifying intellectual and social network.

An important and continuing node in this network was at Purdue University in West Lafayette, Indiana. In the late 1950s a group of young historical economists assembled there, among whom the cross-pollination of historical interests and technical expertise was exceptional. In this group were Lance Davis and Jonathan Hughes and several others known primarily for their work in other fields. One was Stanley Reiter, a mathematical economist who traveled with Davis and Hughes to the meetings of the Economic History Association in September 1960 to present their paper explaining the new quantitative historical research being undertaken at Purdue – and to introduce the term “cliometrics” to the profession. The term was coined by Reiter as a whimsical combination of the words Clio, the muse of history, and metrics, from econometrics. As the years went by, the word stuck and became the name of the field.

To build on the enthusiasm aroused by that presentation, and to “consolidate Purdue’s position as the leader in this country of quantitative research in economic history,” Davis and Hughes (with Reiter’s aid) sought and received funds from Purdue for a meeting in December 1960 of about a dozen like-minded economic historians. They gave it the imposing title, “Conference on the Application of Economic Theory and Quantitative Methods to the Study of Problems of Economic History.” For obvious reasons the meetings were soon called “Clio” or the “Cliometrics Conference” by their familiars. Of the six presentations at the first meeting, none was more intriguing than Robert Fogel’s estimates of the “social saving” accruing from the expansion of the American railroad network to 1890.
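The accounting idea behind a "social saving" estimate is simple to state: price the freight actually carried by rail at the cost of the next-best alternative (canals and wagons), subtract what it actually cost by rail, and express the difference as a share of national product. The sketch below illustrates only this bookkeeping; the tonnage, unit costs, and GNP figure are invented for illustration and bear no relation to Fogel's estimates.

```python
def social_saving(tonnage, rail_cost_per_ton, alt_cost_per_ton):
    """Resource saving from rail relative to the counterfactual
    next-best transport alternative, for a given volume of freight."""
    return tonnage * (alt_cost_per_ton - rail_cost_per_ton)

# Hypothetical numbers: 10 million tons shipped, $0.60/ton by rail
# versus $0.95/ton by the canal-and-wagon alternative.
saving = social_saving(10_000_000, 0.60, 0.95)  # roughly $3.5 million

# Expressed as a share of a hypothetical $1 billion GNP.
share_of_gnp = saving / 1_000_000_000
```

It was by computing such a share for the U.S. economy of 1890, and finding it modest, that Fogel challenged the view of the railroad as indispensable to American growth.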

Sessions were renowned from Clio’s early days as occasions for engaging in sharp debate and asking probing (and occasionally unanswerable) questions. Those who attended the first Clio conference established a tradition of rigorous and detailed analysis of the presenters’ work. In the early years at Purdue and elsewhere, cliometricians developed a research program with mutual support and encouragement and conducted an unusually large proportion of collaborative work, all the while believing in the progressiveness of their efforts.

Indeed, like Walt Rostow, other established economic historians felt that economic history was in need of renewal: Alexander Gerschenkron wrote in 1957 “Economic history is in a poor way. It is unable to attract good students, mainly because the discipline does not present any intellectual challenge …” Some cliometric young Turks were not so mild. While often relying heavily on the wealth of detail amassed in earlier research, they asserted a distinctive identity. The old economic history, it was said, was riddled with errors in economic reasoning and embodied an inadequate approach to causal explanation. The cliometricians insisted on a scientific approach to economic-historical questions, on careful specification of explicit models of the phenomena they were investigating. By implication and by declaration they said that much of conventional wisdom was based on unscientific and unsystematic historical scholarship, on occasion employing language not calculated to endear them to outsiders. The most vocal proponents declared a new order. Douglass North proclaimed that a “revolution is taking place in economic history in the United States … initiated by a new generation of economic historians” intent on reappraising “traditional interpretations of U.S. economic history.” Robert Fogel said that the “novel element in the work of the new economic historians is their approach to measurement and theory,” especially in their ability to find “methods of measuring economic phenomena that cannot be measured directly.” In 1993, these two were awarded the Nobel Memorial Prize in Economics for, in the words of the Nobel committee, being “pioneers in the branch of economic history that has been called the ‘new economic history,’ or cliometrics.”

The hallmark of the top rung of work done by the new economic historians was its integration of fact with theory. As Donald [Deirdre] McCloskey observed in a series of surveys, the theory was often simple. The facts, when not conveniently available, were dug up from surviving sources, whether published or not. Indeed the discipline imposed by the need to measure usually requires more data than would serve for a qualitative argument. Many new economic historians expended considerable effort in the 1960s to expand the American quantitative record. Thus, with eyebrow raised, so to speak, Albert Fishlow remarked in 1970, “It is ironic … to read that … most of the ‘New Economic History’ only applies its ingenuity to analyzing convenient (usually published) data.” Many cliometricians worked their magic not merely by relying on their predecessors’ compilations; as Scott Eddie comments, “one of the most significant contributions of cliometricians’ painstaking search for data has been the uncovering of vast treasure troves of useful data hitherto either unknown, unappreciated, or simply ignored.” Very early in the computer age they put such data into forms suitable for tabulation and statistical analysis.

William Parker and Robert Gallman, with their students, were pioneers in analyzing individual-level data from the United States Census manuscripts, a project arising from Parker’s earlier study of Southern plantations. From the 1860 agricultural census schedule they drew a carefully constructed sample of over 5,000 farms in the cotton counties of the American South and matched those farms with the two separate schedules for the free and slave populations. The Parker-Gallman sample was followed by Census samples for northern agriculture and for the post-bellum South.

The early practitioners of cliometrics applied their theoretical and quantitative skills to some issues well established in the more “traditional” economic historiography, none more important than asking when and how rapidly the North American economy began to experience “modern economic growth.” In the nineteenth century, economic growth in both the U.S. and Canada was punctuated by booms, recessions and financial crises, but the new work provided a better picture of the path of GNP and its components, revealing steady upward trends in aggregate output and in incomes per person and per worker. This last, it seemed clear from the work in the 1950s of Moses Abramovitz and Robert Solow, must have derived significantly from the introduction of new techniques, as well as from expansion of the scale and penetration of the market. Several scholars thus established a related objective, understanding – or at least accounting for – productivity growth.

Attempting to provide sound explanations for growth, productivity change, and numerous other developments in modern economic history, especially of the U.S. and Britain, was the objective of the cliometricians’ theory and quantification. They were much criticized from without for the very use of these technical tools, and within the movement there was much methodological dispute and considerable dissent. Nonetheless, the early cliometricians spawned a sustained intellectual tradition that diffused worldwide from its North American origins.

Historical Economics in Britain

Cliometrics arrived relatively slowly among British economic historians, but it did arrive. Some was homegrown; some was imported. When Jonathan Hughes expressed doubts in 1970 that the American style of cliometrics could ever be an “export product,” he was already wrong. Admittedly, by then the new style had been employed by only a tiny minority of those writing economic history in Britain. Introduction of a more formal style, in Britain as in North America, fell to those trained as economists, initially to Alec Cairncross, Brinley Thomas and Robin Matthews. Cairncross’s book on home and foreign investment and Thomas’s on migration and growth developed, or collected into one place, a great deal of quantitative information for theoretical analysis; their method, as David Landes noted in 1955, was “in the tradition of historical economics, as opposed to economic history.” Matthews’s Study in Trade Cycle History (1954), which examines the trade cycle of 1833-42, was written, he said, in a “quantitative-historical” mode, and contains theoretical reasoning, economic models, and statistical estimates.

Systematic use of national accounting methods to study British economic development was a task undertaken by Phyllis Deane at Cambridge. Her work resulted in two early papers on British income growth and capital formation and in two books of major importance and lasting value: British Economic Growth, 1688-1959 (1962), written with W. A. Cole, and a compendium of underlying data compiled with Brian Mitchell. Despite skeptical reviews, the basics of the Deane-Cole estimates of eighteenth- and early nineteenth-century aggregate growth were accepted widely for two decades and provided a quantitative basis for discussing living standards and the dispersion of technical progress in the new industrial era. Also at Cambridge, Charles Feinstein estimated the composition and magnitude of British investment flows and produced detailed national income estimates for the nineteenth and twentieth centuries, augmenting, refining and revising, as well as extending, the work of Deane and Cole.

All these studies belong to a decidedly British empirical tradition, despite the use of contemporary theoretical constructs, and contained nothing like the later claims of some American cliometricians about the virtues of using formal theory and statistical methods. Research in a consciously cliometric style was strongly encouraged in the 1960s at Oxford by Hrothgar Habakkuk and Max Hartwell, although neither saw himself as a cliometrician. Separately and together, they supported the movement, encouraging students to absorb both quantitative and formal analytical elements into their work.

The incursion of cliometrics into British economic history was – and has remained – neither so widespread nor so dominant as in North America, partly for reasons suggested by Hughes. Although economic history had been taught and practiced in British universities since the 1870s, after the first World War most faculty members were housed in separate departments of economic (and social) history that tended to require of their students only a modicum of economics and little of quantitative methods. With the establishment of new British universities and the rapid expansion of others, a dozen new departments of economic history were founded in the 1960s, staffed largely by people taught in history and economic history departments. The limited presence of cliometric types in Britain at the turn of the 1970s did not come from deficient demand, nor was it due to hostility or indifference. It was due to limited supply stemming from the small scale of the British academic labor market and an aversion to excessive specialization among young economists. Yet the situation was being rectified. On the demand side, British faculties of economics began to welcome more economic historians as colleagues, and, on the supply side, advanced students were being aided by post-graduate stipends and research support provided by the new Social Science Research Council.

During the 1970s a British version of new historical economics began to take shape. Its practitioners expanded their informal networks into formal institutional structures and scholarly ventures. The organized British movement opened in September 1970 at an Anglo-American “Conference on the New Economic History of Britain” in Cambridge (Massachusetts), followed by two others. From these meetings grew a project to re-write British economic history in a cliometric mode, which resulted in the publication in 1981 of a path-breaking two-volume work, The Economic History of Britain since 1700, edited by Roderick Floud and Donald [Deirdre] McCloskey.

Equally path-breaking, perhaps more so, was the outcome of parallel developments in English historical demography, whose practitioners had become progressively more quantitatively and theoretically adept since the 1950s, and for whom 1981 was also a banner year. Although portions of the book had been circulating for some time, E. A. Wrigley’s and R. S. Schofield’s Population History of England, 1541-1871: A Reconstruction and its striking revisions of English demographic history were now available in one massive document.

As in North America, after the first wave of “quantifiers” invaded parts of British historiography, cliometrics was refined in the heat of scholarly debate.

Controversies

Cliometricians started or continued a series of debates about the nature and sources of economic growth and its welfare consequences that decidedly have altered our picture of modern economic history. The first was initiated by Walt Rostow, who argued that modern economic growth begins with a brief and well-defined period of “take-off,” with the necessary “preconditions” having already become the normal condition of a given national economy or society. His metaphor of a “take-off into self-sustained growth”, which first appeared in a journal article, was popularized in Rostow’s famous book, The Stages of Economic Growth (1960). Rostow asserted that “The introduction of the railroad has been historically the most powerful single initiator of take-offs.” To test this contention, Robert Fogel and Albert Fishlow both wrote Ph.D. dissertations dealing in part with Rostow’s view: Fogel’s Railroads and American Economic Growth (1964) and Fishlow’s American Railroads and the Transformation of the Antebellum Economy (1965). These books contain their estimates of the extent of resource saving that had accrued from the adoption of a new transport system, with costs lower than those of canals. Their results rejected Rostow’s view.

Until the cliometricians made a pair of disputatious incursions into its economic history, the American South was largely the province of regional historians – almost a footnote to the story of U.S. economic development. Sparked by Conrad and Meyer, for two decades cliometricians focused intently on the place of the South in the national economy and of slavery in the Southern economy. To what extent was early national economic growth driven by Southern cotton exports and how self-sufficient was the South as an economic region? Douglass North argued that the key to American economic development before 1860 was regional specialization, that Southern cotton was the economy’s staple product, and that much of Western and Northern economic growth derived from Southern demand for food and manufactures. Indeed, Conrad and Meyer had touched a nerve. Their demonstration of current profitability did not establish the long-run viability of the slave system; Yasukichi Yasuba was able to fill that gap by showing that slave prices were regularly more than enough to finance rearing slaves for future sale or employment. Many others tested and refined these early results. As a system of organizing production, American slavery was found to have been thriving on the eve of the Civil War; the sources of that prosperity, however, needed deeper examination.

In Time on the Cross (1974), Robert Fogel and Stanley Engerman not only reaffirmed the profitability and viability of Southern slavery, but they also made claims about the superior productivity of Southern versus Midwestern agriculture and about the relatively generous material comforts afforded to the slave population. Their book sparked a long-running controversy that extended beyond academia and prompted critical examinations and rebuttals by political and social historians and, above all, by their fellow cliometricians. A major critique was Reckoning with Slavery (by Paul David and others, 1976), as much a defense of cliometric method as a catalogue of what the authors saw as the method’s improper or incomplete application in Time on the Cross. Fogel subsequently published Without Consent or Contract (1989), a defense and extension of his and Engerman’s earlier work.

The remarkable antebellum prosperity of the Southern slave economy was followed by an equally remarkable relative decline in Southern per-capita income after the war. While the remainder of the American economy grew rapidly, the South stagnated, with a distinctively low-wage, low-productivity economy and a poorly educated labor force, both black and white. The next generation of cliometricians asked “Why?” Was it the legacy of the slave system, of the virtual absence of industrial development in the antebellum South, of post-Civil War Reconstruction and backlash, of continued reliance on cotton, of Jim Crow, or of racism and discrimination? Roger Ransom and Richard Sutch investigated share-tenancy, debt peonage and labor effort in maintaining cotton cultivation, using individual level data, some derived à la Parker and Gallman, from a sample of the manuscript U.S. Censuses. Gavin Wright focused on an effective separation of the Southern from the national labor market, and Robert Margo examined the region’s low level of educational investment and its consequences.

An entirely new line of investigation derived from the research on slavery, measuring the “biological standard of living” using anthropometric data. Richard Steckel’s paper on slave height profiles led directly to the discussion of “Anthropometric Indexes of Malnutrition” in Without Consent or Contract. In a corrective to the Fogel-Engerman interpretation of the slave diet, Steckel showed how stunted (and thus how poorly fed) slave children were before they came of working age. John Komlos discovered that heights (of West Point cadets) were declining even as American per capita income was rising in the years before the Civil War, what he called the “Antebellum Puzzle.” Elsewhere, Roderick Floud led a project employing anthropometric data from records of British military recruits, while Stephen Nicholas, Deborah Oxley and Steckel analyzed records for male and female convicts transported to Australia.

Industrialization and its new technologies in the U.S. long predate the Civil War. In writing about technological progress, economic historians had, before the 1960s, tended to concentrate on single industries or economies. Yet distinctive “national” technologies emerged in the early nineteenth century (e.g., contemporary British observers distinguished “The American System of Manufactures” from their own). Amid the early ferment of quantitative economic history in the United States, Hrothgar Habakkuk published American and British Technology in the Nineteenth Century: The Search for Labour-Saving Inventions, a truly comparative study. It was 1962, when, as Paul David writes, “economic historians’ interests in Anglo-American technological divergences were suddenly raised from a quiet simmer to a furious boil by the publication of … Habakkuk’s now celebrated book on the subject.” Habakkuk expanded on the idea that the apparent labor-saving bias of American manufacturing techniques was due to land so abundant that American workers were paid (relative to other factors) much more than their British counterparts, but he did not resolve whether the bias was due to more machines per worker, better machines, or more inventiveness.

One strand of the debate over what Peter Temin called Habakkuk’s “labor-scarcity paradox” left to one side the question of “better machines.” It fell to Nathan Rosenberg and Paul David to explore the distinctive technological trajectories of different economies. Rosenberg pointed to the emergence of “technologically convergent” production processes and to the importance of very low relative materials costs in American manufacturing. Paul David reviewed the debate, beginning to formulate a theoretical approach to explain sources of technical change (and divergence). He argued that an economy’s trajectory of technological development is conditioned, perhaps only initially, by relative factor prices, but then by opportunities for further progress based on localized learning from, or constrained by, existing techniques and their histories. David developed the concept of “path dependence,” which is “a dynamic process whose evolution is governed by its own history.”

The first systematic cliometric debate involving European economic history was over an alleged British technological and economic failure in the late nineteenth century. The slower growth of income and exports, the loss of markets even in the Empire, and an “invasion” of foreign manufactures (many American) alarmed businessmen and policymakers alike and led to opposition to a half-century of British “Free Trade.” Who was to blame for loss of competitiveness? Although some scholars attributed Britain’s “climacteric” to the maturation of the technologies underpinning her success during the Industrial Revolution, others attributed it to “entrepreneurial failure” and cited the inability or refusal of British business leaders to adopt the best available technologies. Cliometricians argued, by and large, that British businessmen made their investment and production decisions in a sensible, economically rational fashion, given the constraints they faced; they had made the best of a bad situation. Subsequent research has demonstrated the problem to be more complex, and it is yet to be resolved.

Many results of the cliometrics revolution come from the application of theory and measurement in the service of history; a converse case comes from the macroeconomists. Monetarists, in particular, have placed economic history in the service of theory, prominently in analyzing the Great Depression of the 1930s. In 1963, Milton Friedman and Anna Schwartz, in A Monetary History of the United States, 1867-1960, opened a discussion that has led to widespread, but not universal, acceptance among economists of a sophisticated version of the “quantity theory of money.” Their detailed examination of several episodes in American monetary development under varying institutional regimes allowed them to use a set of “natural experiments” to assess the economic impact of exogenous changes in the stock of money. The Friedman-Schwartz enterprise sought support for the general proposition that money is not simply a veil over real transactions – that money does matter. Their demonstration of that point for the Great Depression initiated an entire scholarly literature involving not only economic historians but also monetary and macroeconomists. Peter Temin was among the first of the economic historians to question their argument, in Did Monetary Forces Cause the Great Depression? (1976). His answer was essentially “No,” stressing declines in consumer spending and in investment in the late 1920s as initiating factors and discounting money stock reductions for the continued downturn. In a later book, Lessons from the Great Depression (1989), Temin in effect recanted his earlier position, impelled by a good deal of further research, especially on international finance. The present consensus is that what Friedman and Schwartz call “The Great Contraction, 1929-1933” may have been initiated by real factors in the late 1920s, but it was faulty public policy and adherence to the Gold Standard that played major roles in turning an economic downturn into “The Great Depression.”

A broad new approach to economic change over time has emerged from the mind of Douglass North. Confronted in the later 1960s with European economic development in its variety and antiquity, North became dissatisfied with the limited modes of analysis that he had applied fruitfully to the American case and concluded that “we couldn’t make sense out of European economic history without explicitly modeling institutions, property rights, and government.” For that matter, making sense of a wider view of American economic history was similarly difficult, as exemplified in the Lance Davis and North venture, Institutional Change and American Economic Growth (1971). The core of North’s model, conceptual rather than formal, is that, when changes in underlying circumstances alter the cost-benefit calculus of existing arrangements, new institutions will arise if there is a net benefit to be realized. Although their approach arose from dissatisfaction with the static nature of economic theory in the 1960s, North and his colleagues nonetheless followed what most other economists would do in arguing that optimal institutional forms will arise dynamically from an essentially profit-maximizing response to changes in incentives. As Davis and North were quick to admit, their effort was “a first (and very primitive) attempt” at formulating a theory of institutional change and applying that theory to American institutional development. North recognized the limitations of his early work on institutional change and has endeavored to develop a more subtle and articulated approach. In Understanding the Process of Economic Change (2005), North stresses again that modeling institutional change is less than straightforward, and he continues to examine the persistence of “institutions that provided incentives for stagnation and decline.”

Retrospect and Prospect

In the 1960s, when the first cliometricians began to group themselves into a distinct intellectual and social movement, buoyed by their revisionist achievements, they (at least many of them) thought they could use their scientific approach to re-write history. This hope may not have been a vain one, but it is yet to be realized. The best efforts of cliometricians have merged with those in other traditions to develop a rather different understanding of the economic past from views maintained half a century ago.

As economic history has evolved, so have the environs economic historians inhabit. In the Anglophone world, economic history – and cliometrics within it – burgeoned with the growth of higher education, but it has recently suffered the effects of retrenchment in that sector. Elsewhere, a new multi-lingual generation of enthusiastic economic historians and historical economists has arisen, with English as the language of international discourse. Both history and economics have been transformed by dissatisfaction with old verities and values, by the adoption of new methods and points of view, and by the posing of new or revived questions. Economic history has been both a beneficiary of and a contributor to such changes.

Although this entry focuses on the development of historical economics in the United States and the United Kingdom, we note that the cliometric approach has diffused well beyond their boundaries. In France the economist’s quantitative approach was fostered when Kuznets’s historical national accounts project recruited scholars in the 1950s to amass and organize the available data on agriculture, output, and population, in a new histoire quantitative. Still, that movement was overshadowed by the Annales school, whose histoire totale involved much data collection but limited economic analysis. Economic history of France, produced in the cliometric mode by scholars trained there, did not arrive in force until the mid-1980s. French cliometrics was first written by economic historians from (or trained in) North America or Britain; the Gallic cliometrics revolution occurred gradually, for “peculiarly French” institutional and ideological reasons. In Germany similar institutional barriers were partially breached in the 1960s with the arrival of a “turnkey” cliometrics operation in the form of an American-trained American scholar, Richard Tilly, who went from Wisconsin to Münster. Tilly was joined later by a few central Europeans who received American degrees, and all have since taught younger German cliometricians. Leading cliometric scholars from Italy, Spain and Portugal likewise received their post-graduate educations in Britain or America. The foremost Japanese cliometrician, Yasukichi Yasuba, received his Ph.D. from Johns Hopkins, supervised by Simon Kuznets.

If cliometrics in and of continental Europe could trace its roots to North America and Britain, by the 1980s it had developed indigenous strength and identity. At the Tenth International Economic History Congress in Leuven, Belgium (1990), a new association of analytical economic historians was founded. Rejecting the use of “cliometrics” as a descriptor, the participants endorsed the nascent European Historical Economics Society. Subsequently, national associations and seminars have grown up under the umbrella of the EHES – for example, French historical economists have the Association Francaise de Cliometrie and a new international journal, Cliometrica, while the Portuguese and Spaniards have sponsored a series of “Iberometrics” conferences.

Cliometrics has transformed itself over the past half-century, forging important links with other disciplines, continuing to broaden its compass, and interpreting “new” phenomena. Its practitioners are showing, for example, that recent “globalization” has origins and manifestations going back half a millennium and, given the recent experience of the formerly Socialist “transitional” economies, that the deep historical roots of institutions, organizations, values and behavior in the developed economies cannot be duplicated by following simple formulae. Despite the presentism of contemporary society, economic history will continue to address essential questions of origins and consequences, and it seems likely that cliometricians will complement and sometimes lead their colleagues in providing the answers. Cliometrics is a well-established field of study, and its practitioners continue to increase our understanding of how economies evolve.

Source Note: The bulk of this article is a condensed version of the introduction to Lyons, Cain, and Williamson, eds., Reflections on the Cliometrics Revolution: Conversations with Economic Historians (2008), copyright (c) The Cliometric Society, Inc., which receives the royalties; reproduced by permission. Readers should consult that book for a more complete presentation, notes, and a full bibliography.

Further Reading

Coats, A. W. “The Historical Context of the ‘New’ Economic History.” Journal of European Economic History 9, no. 1 (1980): 185-207.

“Cliometrics after 40 Years.” American Economic Review: Papers and Proceedings 87, no. 2 (1997): 396-414 [commentary by Claudia Goldin, Avner Greif, James J. Heckman, John R. Meyer, and Douglass C. North].

Crafts, N. F. R. “Cliometrics, 1971-1986: A Survey.” Journal of Applied Econometrics 2, no. 3 (1987): 171-92.

Davis, Lance E., Jonathan R. T. Hughes and Duncan McDougall. American Economic History. Homewood, IL: Irwin, 1961. [The first textbook of U.S. economic history to make systematic use of economic theory to organize the exposition. Second edition, 1965; third edition, 1969.]

Davis, Lance E., Jonathan R. T. Hughes and Stanley Reiter. “Aspects of Quantitative Research in Economic History.” Journal of Economic History 20, no. 4 (1960): 539-47 [in which “cliometrics” first appeared in print].

Drukker, J. W. The Revolution That Bit Its Own Tail: How Economic History Has Changed Our Ideas about Economic Growth. Amsterdam: Aksant, 2006.

Engerman, Stanley L. “Cliometrics.” In The Social Science Encyclopedia, second edition, edited by Adam Kuper and Jessica Kuper, 96-98. New York: Routledge, 1996.

Field, Alexander J. “The Future of Economic History.” In The Future of Economic History, edited by Alexander J. Field, 1-41. Boston: Kluwer-Nijhoff, 1987.

Fishlow, Albert, and Robert W. Fogel. “Quantitative Economic History: An Interim Evaluation. Past Trends and Present Tendencies.” Journal of Economic History 31, no. 1 (1971): 15-42.

Floud, Roderick. “Cliometrics.” In The New Palgrave: A Dictionary of Economics, edited by John Eatwell, Murray Milgate and Peter Newman, vol. 1, 452-54. London: Macmillan, 1987.

Goldin, Claudia. “Cliometrics and the Nobel.” Journal of Economic Perspectives 9, no. 2 (1995): 191-208.

Grantham, George. “The French Cliometric Revolution: A Survey of Cliometric Contributions to French Economic History.” European Review of Economic History 1, no. 3 (1997): 353-405.

Lamoreaux, Naomi R. “Economic History and the Cliometric Revolution.” In Imagined Histories: American Historians Interpret the Past, edited by Anthony Molho and Gordon S. Wood, 59-84. Princeton: Princeton University Press, 1998.

Lyons, John S., Louis P. Cain, and Samuel H. Williamson, eds. Reflections on the Cliometrics Revolution: Conversations with Economic Historians. New York: Routledge, 2008.

McCloskey, Donald [Deirdre] N. Econometric History. London: Macmillan, 1987.

Parker, William, editor. Trends in the American Economy in the Nineteenth Century. Princeton, N.J.: Princeton University Press, 1960. [Volume 24 in Studies in Income and Wealth, in which many of the papers presented at the 1957 Williamstown conference appear.]

Tilly, Richard. “German Economic History and Cliometrics: A Selective Survey of Recent Tendencies.” European Review of Economic History 5, no. 2 (2001): 151-87.

Whaples, Robert. “A Quantitative History of the Journal of Economic History and the Cliometric Revolution.” Journal of Economic History 51, no. 2 (1991): 289-301.

Williamson, Samuel H. “The History of Cliometrics.” In The Vital One: Essays in Honor of Jonathan R. T. Hughes, edited by Joel Mokyr, 15-31. Greenwich, Conn.: JAI Press, 1991. [Research in Economic History, Supplement 6.]

Williamson, Samuel H., and Robert Whaples. “Cliometrics.” In The Oxford Encyclopedia of Economic History, vol. 1, edited by Joel Mokyr, 446-47. Oxford: Oxford University Press, 2003.

Wright, Gavin. “Economic History, Quantitative: United States.” In International Encyclopedia of the Social and Behavioral Sciences, edited by Neil J. Smelser and Paul B. Baltes, 4108-14. Amsterdam: Elsevier, 2001.

Citation: Lyons, John. “Cliometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 27, 2009. URL http://eh.net/encyclopedia/cliometrics/

Economic History of Premodern China (from 221 BC to c. 1800 AD)

Kent Deng, London School of Economics (LSE)

China has the longest continually recorded history in the premodern world. For economic historians, it makes sense to begin with the formation of China’s national economy in the wake of China’s unification in 221 BC under the Qin. The year 1800 AD coincides with the beginning of the end for China’s premodern era, which was hastened by the First Opium War (1839–42). Hence, the time span of this article is two millennia.

Empire-building

Evidence indicates that there was a sharp difference in the economy between China’s pre-imperial era (until 220 BC) and its imperial era. There can be little doubt that the establishment of the Empire of China (the phrase avoids “the Chinese Empire,” as it was not always an empire by and for the Chinese) served as a demarcation line in the history of the East Asian Mainland.

The empire was a result of historical contingency rather than inevitability. First of all, before the unification, China’s multiple units successfully accommodated a mixed economy of commerce, farming, handicrafts and pastoralism. Internal competition also allowed science and technology as well as literature and art to thrive on the East Asian Mainland. This was known as “a hundred flowers blossoming” (baijia zhengming, literally “a grand song contest with one hundred contenders”). Feudalism was widely practiced. Unifying such diverse economic and political units inevitably incurred huge social costs. Secondly, the winner of the bloody war on the East Asian Mainland, the Qin Dukedom and then the Qin Kingdom (840–222 BC), was for a long time neither a rich nor a strong unit during the Spring and Autumn Period (840–476 BC) and the following Warring States Period (475–222 BC). It was only during the last three decades of the Warring States Period that the Qin eventually managed to overpower its rivals by force and consequently unified China. Moreover, although it unified China, the Qin was the worst-managed dynasty in the entire history of China: it crumbled after only fifteen years. So, it was not an easy birth, and the empire system was in serious jeopardy from the start. The main justification for China’s unification seems to have been geopolitical, hence external – the nomadic threat from the steppes (Deng 1999).

Nevertheless, empire-building in China marked a major discontinuity in history. Under the Western Han (206 BC– 24 AD), the successor of the Qin, empire-building not only sharply reduced internal competition among various political and economic centers on the East Asian Mainland, it also remolded the previous political and economic systems into a more integrated and more homogeneous type, characterized by an imperial bureaucracy under a fiscal state hand in hand with an agriculturally dominated economy. With such a package imposed by empire-builders, the economy deviated from its mixed norm. Feudalism lost its footing in China. This fundamentally changed the growth and development trajectory of China for the rest of the imperial period until c. 1800.

It is fair to state that private landholding property rights, including free-holding (dominant in North China over the long run) and lease-holding (in parallel with free-holding in South China during the post-Southern Song era, i.e. 1279–1840), laid the very cornerstone of the empire’s economy from the Qin unification onward. Chinese laws clearly defined and protected such rights. In return, the imperial state had the mandate to tax the population, the vast majority of whom (some 80 percent of the total) were peasants. The state also depended on the rural population for army recruits. Peasants, on the other hand, regularly acted as the main force to populate newly captured areas along the empire’s long frontiers. Such a symbiotic relationship between the imperial state and China’s population was crystallized in a mutually beneficial state-peasant alliance over the long run. China’s lasting Confucian learning and Confucian meritocracy served as a social bonding agent for the alliance.

It was such an alliance that formed the foundations of China’s political economy, which in turn created a centripetal force to hold the empire together against the restoration of feudalism and political decentralization (Deng 1999). It also served as a constant drive for China’s geographic expansion and an effective force against run-away proto-industrialization, commercialization and urbanization. So, to a great extent, China’s political economy was circumscribed by this alliance. Occasionally, this state-peasant alliance did break up, and political and economic turmoil followed. The ultimate internal cause for the break-up was excessive rent-seeking by the state, seen as a deviation from the Confucian norm. It was often the peasantry that reversed this deviation and put society back on track by way of armed mass rebellions which replaced the old regime with a new one. This pattern is known, superficially, as the “dynastic cycle” of China.

The Empire’s Expansion

China’s fiscal state and landholding peasantry both had strong incentives and tendencies to increase the land territory of the empire. This was simply because more land meant more resource endowments for the peasantry and more tax revenue for the state. The Chinese non-feudal equal-inheritance practice perpetuated such incentives and tendencies at the grass-roots level: unless more and more land was brought in for farming, Chinese farms faced the constant problem of a shrinking size. Not surprisingly, the empire gradually expanded in all directions from its hub along the Yellow River in the north. It colonized the “near south” (around the Yangtze Valley) and the west (oases along the Silk Road) during the Western Han (206 BC – 24 AD). It reached the “far south,” including part of modern-day Vietnam, under the Tang (618–907). The Ming (1368–1644) annexed off-shore Taiwan. The Qing (1644–1911) doubled China’s territory by going further into China’s “far north” and “far west” (Deng 1993: xxiii). At each step of this internal colonization, landholding peasants, shoulder to shoulder with the Chinese army and bureaucrats, duplicated the cells of China’s agricultural economy. The state often provided emigrant farmers who resettled in new regions with material and financial aid, typically free passage, seed, basic farming tools and tax holidays. The geographic expansion of the empire stopped only at the point when it reached the physical limits for farming.

So, in essence, the expansion of the Chinese empire was the result of the dynamics of Chinese institutions characterized by a fiscal state and a landholding peasantry, as this pattern fit well with China’s landholding property rights and non-feudal equal-inheritance practice. Thus, one of the two growth dimensions of the Chinese agricultural sector was this extensive pattern in geographic terms.

Agrarian Success

In this context, the success of the geographic expansion of the Chinese empire was at the same time a success in the growth of the Chinese agricultural sector. Firstly, despite its ten main soil types, the empire’s territory was converted to a huge farming zone. Secondly, the agricultural sector was by far the single most important source of employment for the majority of the Chinese population. Thirdly, taxes from the agricultural sector made up the lion’s share of the state’s revenue.

Private property rights over land also created incentives for ordinary farmers to produce more and better. In doing so, agricultural total factor productivity increased. Growth became intensive. This was the other dimension of growth in the Chinese agricultural sector. It is not so surprising that premodern China had at least three main “green revolutions.” The first such green revolution, of the dry farming type, appeared in the Western Han Period (206 BC–24 AD) with the aggressive introduction of iron ploughs in the north by the state (Bray 1984). The result was an increase in agricultural total factor productivity as land was better and more efficiently tilled and more marginal regions were brought under cultivation. The second green revolution took place during the Northern Song (960–1127) with the state promotion of early-ripening rice in the south (Ho 1956). This ushered in the era of multiple cropping in the empire. The third green revolution occurred from the late Ming through the mid-Qing period (Ming: 1368–1644; Qing: 1644–1911) with the spread of the “New World crops,” namely maize and sweet potatoes, and the re-introduction of early-ripening rice (Deng 1993: ch. 3). The New World crops helped to convert more marginal land into farming areas. Earlier, under the Yuan, cotton was deliberately introduced by the Mongols as a substitute for silk in Chinese clothing consumption, to save the silk for the Mongols’ international trade. All these green revolutions had high participation rates in the general population.

These green revolutions significantly and permanently changed China’s economic landscape. It was not a sheer accident that China’s population growth became particularly strong during and shortly after these revolutions (Deng 2003).

Markets and the Market Economy

With a fiscal state which taxed the economy and spent its revenue in the economy, and with a high-yield agriculture which produced a constant surplus, a market economy developed in premodern China. By the end of the Qing, as much as one-third of China’s post-tax agricultural output was subject to market exchange (Perkins 1969: 115; Myers 1970: 12–13). If ten percent is taken as the norm for the tax rate borne by the agricultural sector, the aggregate surplus of the agricultural sector was likely to be some forty percent of its total output. This magnitude of agricultural surplus was the foundation of growth and development of other sectors/activities in the economy.
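The forty-percent figure follows from straightforward arithmetic. A minimal sketch of the calculation (the 100-unit total output is an illustrative assumption, not a figure from the sources):

```python
# Illustrative arithmetic behind the surplus estimate in the text.
# Assume (hypothetically) a total agricultural output of 100 units.
total_output = 100.0

# The text takes a ten percent tax rate as the norm for agriculture.
tax = total_output * 0.10                # 10 units claimed by the state
post_tax_output = total_output - tax     # 90 units retained by producers

# Perkins (1969) / Myers (1970): up to one-third of post-tax output
# entered market exchange by the end of the Qing.
marketed = post_tax_output / 3           # 30 units sold on markets

# Aggregate surplus = taxed share plus marketed share.
surplus_share = (tax + marketed) / total_output
print(f"Aggregate surplus: {surplus_share:.0%} of total output")
```

Under these assumptions the taxed share (10 units) and the marketed share (30 units) together come to 40 percent of total output, matching the magnitude given in the text.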

Monetization in China had the same life span as the empire itself. The state mints mass-produced coins on a regular basis for the domestic economy and beyond. Due to the lack of monetary metals, token currencies made of cloth or paper were used on a large scale, especially during the Song and Yuan periods (Northern Song: 960–1127; Southern Song: 1127–1279; Yuan: 1279–1368). Consequently, inflation resulted. Perhaps the most spectacular market phenomenon was China’s persistent importation of foreign silver from the fifteenth to nineteenth centuries, during the Ming-Qing Period. It has been estimated that one-third of the silver output of the New World ended up in China, not to mention the amount imported from neighboring Japan (Flynn and Giráldez 1995). The imported silver consequently made China a silver-standard economy, eventually causing a price revolution after the market was saturated with foreign silver, which in turn led to devaluation of the currency (Deng 1997: Appendix C).

Rudimentary credit systems, often of the short-term type, also appeared in China. Houses and farming land were often used as collateral to raise money. But there is no sign that there was a significant reduction of business risks for the creditor. Frequent community and/or state interference with contracts by blocking land transfers from debtors to creditors was counter-productive. So, to a great extent, China’s customary economy and command economy overruled the market one.

The nature of this surplus-based market exchange determined the multi-layered structure of the Chinese domestic market. At the grass-roots level, the market was localized, decentralized and democratic (Skinner 1964–5). This was highly compatible with the de facto village autonomy across the empire, as the imperial administration stopped at the county level (with a total number of roughly 1,000–1,500 such counties in all under the Qing). At the top of the market structure, the state controlled to a great extent some “key commodities” including salt (as during Ming and Qing), wine and iron and steel (as under the Han). Foreign trade was customarily under the state monopoly or partial monopoly, too. This left a limited platform for professional merchants to operate, a factor that ultimately determined the weakness of merchants’ influence in the economy and state politics.

So, paradoxically, China had a long history of market activities but a weak merchant class tradition. China’s social mobility and meritocracy, the antitheses of a feudal aristocracy, directed the talent and wealth to officialdom (Ho 1962; Rawski 1979). The existence of factor markets for land also allowed merchants to join the landholding class. Both undermined the rise of the merchant class.

Handicrafts and Urbanization

The sheer quantity of China’s handicraft output was impressive. It has been estimated that in the early nineteenth century as much as one-third of the world’s total manufactures was produced by China (Kennedy 1987: 149; Huntington 1996: 86). In terms of ceramics and silk, China was able to supply the outside world almost single-handedly at times. Asia was traditionally China’s export market for paper, stationery and cooking pots. All these are highly consistent with China’s intake of silver during the same period.

However, the growth in China’s handicrafts and urbanization was a function of the surpluses produced from the agricultural sector. This judgment is based on (1) the fact that not until the end of the Qing Period did China begin importing moderate quantities of foodstuffs from the outside world to help feed the population; and (2) the fact that the handicraft sector never challenged agricultural dominance in the economy, despite a symbiotic relationship between them.

By the same token, urbanization rarely exceeded ten percent of the total population although large urban centers were established. For example, during the Song, the northern capital Kaifeng (of the Northern Song) and southern capital Hangzhou (of the Southern Song) had 1.4 million and one million inhabitants, respectively (Jones et al. 1993: ch. 9). In addition, it was common that urban residents also had one foot in the rural sector due to private landholding property rights.

Science and Technology

In the context of China’s high-yield agriculture (hence surpluses in the economy which were translated into leisure time for other pursuits) and Confucian meritocracy (hence a continued over-supply of the literate vis-à-vis the openings in officialdom, and persistent record keeping by premodern standards) (Chang 1962: ch. 1; Deng 1993: Appendix 1), China became one of the hotbeds of scientific discoveries and technological development of the premodern world (Needham 1954–95). It is commonly agreed that China led the world in science and technology from about the tenth century to about the fifteenth century.

The Chinese sciences and technologies were concentrated in several fields, mainly material production, transport, weaponry and medicine. A common feature of all Chinese discoveries was their trial-and-error basis and incremental improvement. Here, China’s continuous history and large population became an advantage. However, this trial-and-error approach had its developmental ceiling, and incremental improvement faced diminishing returns (Elvin 1973: ch. 17). So, although China once led the world, it was unable to realize what is known as the “Scientific Revolution,” whose origin may well have been oriental/Chinese (Hobson 2004).

Living Standards

It has been argued that in the Ming-Qing Period the standard of living reached and stayed at a high level, comparable with the wealthiest parts of Western Europe by 1800 in material terms (Pomeranz 2000) and perhaps in education as well (Rawski 1979). Although the evidence is not conclusive, the claims certainly are compatible with China’s wealth in the context of (1) the rationality of private property rights-led growth, (2) the total factor productivity growth associated with China’s green revolutions from the Han to the Ming-Qing and the economic revolution under the Song, and (3) China’s export capacity (hence China’s surplus output) and China’s silver imports (hence the purchasing power of China’s surplus).

Debates about China’s Long-term Economic History

The pivotal point of the debate about China’s long-term economic history has been why and how China did not go any further from its premodern achievements. Opinions have been divided and the debate goes on (Deng 2000). Within the wide spectrum of views, some are regarded as Eurocentric; some, Sinocentric (Hobson 2004). But a great many are neither, using some universally applicable criteria such as factor productivity (labor, land and capital), economic optimization/maximization, organizational efficiency, and externalities.

In a nutshell, the debate is whether to view China as a bottle “half empty” (hence China did not realize its full growth potential by the post-Renaissance Western European standard) or “half full” (hence China over-performed by the premodern world standard). In any case, China was “extra-ordinary” either in terms of its outstanding performance for a premodern civilization or in terms of its shortfall for modern growth despite its possession of many favorable preconditions to do so.

China’s premodern history is indeed indispensable for understanding how a dominant traditional economy (in terms of its sheer size and longevity) perpetuated itself and how the modern economy emerged in world history.

References

Bray, Francesca. “Section 41: Agriculture.” In Science and Civilisation in China, edited by Joseph Needham, Volume 6. Cambridge: Cambridge University Press, 1984.

Chang, Chung-li. The Income of the Chinese Gentry. Seattle: University of Washington Press, 1962.

Deng, Gang. Chinese Maritime Activities and Socio-economic Consequences, c. 2100 BC – 1900 AD. Westport, CT: Greenwood Publishing, 1997.

Deng, Gang. Development versus Stagnation: Technological Continuity and Agricultural Progress in Premodern China. Westport, CT: Greenwood Publishing, 1993.

Deng, Gang. The Premodern Chinese Economy – Structural Equilibrium and Capitalist Sterility. London: Routledge, 1999.

Deng, K. G. “A Critical Survey of Recent Research in Chinese Economic History.” Economic History Review 53, no. 1 (2000): 1–28.

Deng, K. G. “Fact or Fiction? Re-Examination of Chinese Premodern Population Statistics.” Economic History Department Working Papers no. 68, London School of Economics, 2003.

Elvin, Mark. The Pattern of the Chinese Past. Stanford: Stanford University Press, 1973.

Flynn, D. O. and Giráldez, Arturo. “Born with a ‘Silver Spoon’: The Origin of World Trade.” Journal of World History 6 no. 2 (1995): 201–21.

Ho, Ping-ti. “Early-Ripening Rice in Chinese History.” Economic History Review, 2nd ser. (1956): 200–18.

Ho, Ping-ti. The Ladder of Success in Imperial China: Aspects of Social Mobility, 1368–1911. New York: Columbia University Press, 1962.

Hobson, J. M. The Eastern Origins of Western Civilisation. Cambridge: Cambridge University Press, 2004.

Huntington, S. P. The Clash of Civilisations and the Remaking of World Order. New York: Simon and Schuster, 1996.

Jones, E. L., Lionel Frost and Colin White. Coming Full Circle: An Economic History of the Pacific Rim. Melbourne and Oxford: Oxford University Press, 1993.

Kennedy, Paul. The Rise and Fall of the Great Powers. New York: Random House, 1987.

Myers, R. H. The Chinese Peasant Economy: Agricultural Development in Hopei and Shangtung, 1890–1949. Cambridge, MA: Harvard University Press, 1970.

Needham, Joseph, editor. Science and Civilisation in China. Cambridge: Cambridge University Press, 1954–2000.

Perkins, Dwight. Agricultural Development in China, 1368–1968. Edinburgh: Edinburgh University Press, 1969.

Pomeranz, Kenneth. The Great Divergence: Europe, China and the Making of the Modern World Economy. Princeton: Princeton University Press, 2000.

Rawski, E. S. Education and Popular Literacy in Ch’ing China. Ann Arbor: University of Michigan Press, 1979.

Skinner, G. W. “Marketing and Social Structure in Rural China.” Journal of Asian Studies 24 (1964–65): 3–44, 195–228, 363–400.

Citation: Deng, Kent. “Economic History of Premodern China”. EH.Net Encyclopedia, edited by Robert Whaples. November 7, 2004. URL
http://eh.net/encyclopedia/economic-history-of-premodern-china-from-221-bc-to-c-1800-ad/

Child Labor during the British Industrial Revolution

Carolyn Tuttle, Lake Forest College

During the late eighteenth and early nineteenth centuries Great Britain became the first country to industrialize. Because of this, it was also the first country where the nature of children’s work changed so dramatically that child labor became seen as a social problem and a political issue.

This article examines the historical debate about child labor in Britain, Britain’s political response to problems with child labor, quantitative evidence about child labor during the 1800s, and economic explanations of the practice of child labor.

The Historical Debate about Child Labor in Britain

Child Labor before Industrialization

Children of poor and working-class families had worked for centuries before industrialization – helping around the house or assisting in the family’s enterprise when they were able. The practice of putting children to work was first documented in the Medieval era when fathers had their children spin thread for them to weave on the loom. Children performed a variety of tasks that were auxiliary to their parents but critical to the family economy. The family’s household needs determined the family’s supply of labor, and “the interdependence of work and residence, of household labor needs, subsistence requirements, and family relationships constituted the ‘family economy'” (Tilly and Scott 1978, 12).

Definitions of Child Labor

The term “child labor” generally refers to children who work to produce a good or a service which can be sold for money in the marketplace regardless of whether or not they are paid for their work. A “child” is usually defined as a person who is dependent upon other individuals (parents, relatives, or government officials) for his or her livelihood. The exact ages of “childhood” differ by country and time period.

Preindustrial Jobs

Children who lived on farms worked with the animals or in the fields planting seeds, pulling weeds and picking the ripe crop. Ann Kussmaul’s (1981) research uncovered a high percentage of youths working as servants in husbandry in the sixteenth century. Boys looked after the draught animals, cattle and sheep while girls milked the cows and cared for the chickens. Children who worked in homes were either apprentices, chimney sweeps, domestic servants, or assistants in the family business. As apprentices, children lived and worked with their master who established a workshop in his home or attached to the back of his cottage. The children received training in the trade instead of wages. Once they became fairly skilled in the trade they became journeymen. By the time they reached the age of twenty-one, most could start their own business because they had become highly skilled masters. Both parents and children considered this a fair arrangement unless the master was abusive. The infamous chimney sweeps, however, had apprenticeships considered especially harmful and exploitative. Boys as young as four would work for a master sweep who would send them up the narrow chimneys of British homes to scrape the soot off the sides. The first labor law passed in Britain to protect children from poor working conditions, the Act of 1788, attempted to improve the plight of these “climbing boys.” Around age twelve many girls left home to become domestic servants in the homes of artisans, traders, shopkeepers and manufacturers. They received a low wage, and room and board in exchange for doing household chores (cleaning, cooking, caring for children and shopping).

Children who were employed as assistants in domestic production (or what is also called the cottage industry) were in the best situation because they worked at home for their parents. Children who were helpers in the family business received training in a trade and their work directly increased the productivity of the family and hence the family’s income. Girls helped with dressmaking, hat making and button making while boys assisted with shoemaking, pottery making and horse shoeing. Although hours varied from trade to trade and family to family, children usually worked twelve hours per day with time out for meals and tea. These hours, moreover, were not regular over the year or consistent from day-to-day. The weather and family events affected the number of hours in a month children worked. This form of child labor was not viewed by society as cruel or abusive but was accepted as necessary for the survival of the family and development of the child.

Early Industrial Work

Once the first rural textile mills were built (1769) and child apprentices were hired as primary workers, the connotation of “child labor” began to change. These places of work were decried as “dark satanic mills” (a phrase borrowed from William Blake), and E. P. Thompson described them as “places of sexual license, foul language, cruelty, violent accidents, and alien manners” (1966, 307). Although long hours had been the custom for agricultural and domestic workers for generations, the factory system was criticized for strict discipline, harsh punishment, unhealthy working conditions, low wages, and inflexible work hours. The factory depersonalized the employer-employee relationship and was attacked for stripping workers of their freedom, dignity and creativity. These child apprentices were paupers taken from orphanages and workhouses; they were housed, clothed and fed but received no wages for their long day of work in the mill. A conservative estimate is that around 1784 one-third of the total workers in country mills were apprentices and that their numbers reached 80 to 90% in some individual mills (Collier, 1964). Even after the First Factory Act of 1802 (which attempted to improve the conditions of parish apprentices), mill owners such as Sir Robert Peel and Samuel Greg continued to solve their labor shortages by employing parish apprentices.

After the invention and adoption of Watt’s steam engine, mills no longer had to locate near water and rely on apprenticed orphans – hundreds of factory towns and villages developed in Lancashire, Manchester, Yorkshire and Cheshire. The factory owners began to hire children from poor and working-class families to work in these factories preparing and spinning cotton, flax, wool and silk.

The Child Labor Debate

What happened to children within these factory walls became a matter of intense social and political debate that continues today. Pessimists such as Alfred (1857), Engels (1926), Marx (1909), and Webb and Webb (1898) argued that children worked under deplorable conditions and were being exploited by the industrialists. A picture was painted of the “dark satanic mill” where children as young as five and six years old worked for twelve to sixteen hours a day, six days a week without recess for meals in hot, stuffy, poorly lit, overcrowded factories to earn as little as four shillings per week. Reformers called for child labor laws and after considerable debate, Parliament took action and set up a Royal Commission of Inquiry into children’s employment. Optimists, on the other hand, argued that the employment of children in these factories was beneficial to the child, family and country and that the conditions were no worse than they had been on farms, in cottages or up chimneys. Ure (1835) and Clapham (1926) argued that the work was easy for children and helped them make a necessary contribution to their family’s income. Many factory owners claimed that employing children was necessary for production to run smoothly and for their products to remain competitive. John Wesley, the founder of Methodism, recommended child labor as a means of preventing youthful idleness and vice. Ivy Pinchbeck (1930) pointed out, moreover, that working hours and conditions had been as bad in the older domestic industries as they were in the industrial factories.

Factory Acts

Although the debate over whether children were exploited during the British Industrial Revolution continues today [see Nardinelli (1988) and Tuttle (1998)], Parliament passed several child labor laws after hearing the evidence collected. The three laws that most affected the employment of children in the textile industry were the Cotton Factories Regulation Act of 1819 (which set the minimum working age at 9 and the maximum working day at 12 hours), the Regulation of Child Labor Law of 1833 (which established paid inspectors to enforce the laws) and the Ten Hours Bill of 1847 (which limited working hours to 10 for children and women).

The Extent of Child Labor

The significance of child labor during the Industrial Revolution derived both from changes in the nature of child labor and from the extent to which children were employed in the factories. Cunningham (1990) argues that the idleness of children was more of a problem during the Industrial Revolution than exploitation through employment. He examines the Report on the Poor Laws in 1834 and finds that in parish after parish there was very little employment for children. In contrast, Cruickshank (1981), Hammond and Hammond (1937), Nardinelli (1990), Redford (1926), Rule (1981), and Tuttle (1999) claim that a large number of children were employed in the textile factories. These two seemingly contradictory claims can be reconciled because the labor market for children was not a national one. Instead, child labor was a regional phenomenon: the incidence of child labor was high in the manufacturing districts and low in rural and farming districts.

Since the first reliable British Census to inquire about children’s work was taken in 1841, it is impossible to compare the number of children employed on farms and in cottage industry with the number employed in factories during the heart of the British Industrial Revolution. It is possible, however, to get a sense of how many children were employed by the industries considered the “leaders” of the Industrial Revolution – textiles and coal mining. Although there is still no consensus on the degree to which industrial manufacturers depended on child labor, research by several economic historians has uncovered several facts.

Estimates of Child Labor in Textiles

Using data from an early British Parliamentary Report (1819[HL.24]CX), Freudenberger, Mather and Nardinelli concluded that “children formed a substantial part of the labor force” in the textile mills (1984, 1087). They calculated that while only 4.5% of the cotton workers were under 10, 54.5% were under the age of 19 – confirmation that the employment of children and youths was pervasive in cotton textile factories (1984, 1087). Tuttle’s research using a later British Parliamentary Report (1834(167)XIX) shows that this trend continued. She calculated that children under 13 comprised roughly 10 to 20% of the work forces in the cotton, wool, flax and silk mills in 1833. The employment of youths between the ages of 13 and 18 was higher still, comprising roughly 23 to 57% of the work forces in cotton, wool, flax and silk mills. Cruickshank also confirms that the contribution of children to textile work forces was significant. She showed that the growth of the factory system meant that from one-sixth to one-fifth of the total work force in the textile towns in 1833 were children under 14. There were 4,000 children in the mills of Manchester; 1,600 in Stockport; 1,500 in Bolton and 1,300 in Hyde (1981, 51).

The employment of children in textile factories continued to be high until the mid-nineteenth century. According to the British Census of 1841, the three most common occupations of boys were Agricultural Labourer, Domestic Servant and Cotton Manufacture, employing 196,640; 90,464 and 44,833 boys under 20, respectively. The three most common occupations for girls likewise included Cotton Manufacture: in 1841, 346,079 girls were Domestic Servants, 62,131 were employed in Cotton Manufacture and 22,174 were Dress-makers. By 1851 the three most common occupations for boys under 15 were Agricultural Labourer (82,259), Messenger (43,922) and Cotton Manufacture (33,228), and for girls they were Domestic Servant (58,933), Cotton Manufacture (37,058) and Indoor Farm Servant (12,809) (1852-53[1691-I]LXXXVIII, pt. 1). It is clear from these findings that children made up a large portion of the work force in textile mills during the nineteenth century. Using returns from the Factory Inspectors, S. J. Chapman’s (1904) calculations show that the percentage of child operatives under 13 trended downward over the first half of the century, from 13.4% in 1835 to 4.7% in 1838, 5.8% in 1847 and 4.6% by 1850, and then rose again to 6.5% in 1856, 8.8% in 1867, 10.4% in 1869 and 9.6% in 1870 (1904, 112).

Estimates of Child Labor in Mining

Children and youth also comprised a relatively large proportion of the work forces in British coal and metal mines. In 1842, children and youth made up between 19 and 40% of the work forces of coal and metal mines. Coal mines employed a larger proportion of children underground, while in metal mines more children were found on the surface, “dressing the ores” (a process of separating the ore from the dirt and rock). By 1842 one-third of the underground work force of coal mines was under the age of 18, and one-fourth of the work force of metal mines consisted of children and youth (1842[380]XV). In 1851 children and youth (under 20) comprised 30% of the total population of coal miners in Great Britain. After the Mining Act of 1842 was passed, which prohibited girls and women from working in mines, fewer children worked in mines. The Reports on Sessions 1847-48 and 1849 Mining Districts I (1847-48[993]XXVI and 1849[1109]XXII) and The Reports on Sessions 1850 and 1857-58 Mining Districts II (1850[1248]XXIII and 1857-58[2424]XXXII) contain statements from mining commissioners that the number of young children employed underground had diminished.

Jenkin (1927) estimates that in 1838 roughly 5,000 children were employed in the metal mines of Cornwall, and by 1842 the returns from The First Report show that as many as 5,378 children and youth worked in the mines. In 1838 Lemon collected data from 124 tin, copper and lead mines in Cornwall and found that 85% employed children. In the 105 mines that employed child labor, children comprised from as little as 2% to as much as 50% of the work force, with a mean of 20% (Lemon, 1838). According to Jenkin, the employment of children in the copper and tin mines of Cornwall began to decline by 1870 (1927, 309).

Explanations for Child Labor

The Supply of Child Labor

Given the role of child labor in the British Industrial Revolution, many economic historians have tried to explain why child labor became so prevalent. A competitive model of the labor market for children has been used to examine the factors that influenced the demand for children by employers and the supply of children from families. The majority of scholars argue that it was a plentiful supply of children that increased their employment in industrial work places, turning child labor into a social problem. The most common explanation for the increase in supply is poverty – families sent their children to work because they desperately needed the income. Another common explanation is that work was a traditional and customary component of ordinary people’s lives: parents had worked when they were young and required their children to do the same. The prevailing working-class view of childhood was that children were “little adults” who were expected to contribute to the family’s income or enterprise. Other, less commonly argued sources of an increase in the supply of child labor were that parents sent their children to work because they were greedy and wanted more income to spend on themselves, or that children wanted out of the house because their parents were emotionally and physically abusive. Whatever the reason for the increase in supply, scholars agree that since mandatory schooling laws were not passed until 1876, even well-intentioned parents had few alternatives.

The Demand for Child Labor

Other compelling explanations argue that it was demand, not supply, that increased the use of child labor during the Industrial Revolution. One explanation came from the industrialists and factory owners – children were a cheap source of labor that allowed them to stay competitive. Managers and overseers saw other advantages to hiring children, pointing out that children were ideal factory workers because they were obedient, submissive, likely to respond to punishment and unlikely to form unions. In addition, since the machines had reduced many procedures to simple one-step tasks, unskilled workers could replace skilled workers. Finally, a few scholars argue that the nimble fingers, small stature and suppleness of children were especially suited to the new machinery and work situations: children had a comparative advantage with machines that were small and built low to the ground, as well as in the narrow underground tunnels of coal and metal mines. The Industrial Revolution, in this view, increased the demand for child labor by creating work situations where children could be very productive.

Influence of Child Labor Laws

Whether it was an increase in demand or an increase in supply, the argument that child labor laws were not considered much of a deterrent to employers or families is fairly convincing. Since fines were not large and enforcement was not strict, the implicit tax placed on the employer or family was quite low in comparison to the wages or profits the children generated [Nardinelli (1980)]. On the other hand, some scholars believe that the laws reduced the number of younger children working and reduced labor hours in general [Chapman (1904) and Plener (1873)].

Despite the laws, there were still many children and youth employed in textiles and mining by mid-century. Booth (1886) calculated that there were still 58,900 boys and 82,600 girls under 15 employed in textiles and dyeing in 1881. In mining the numbers did not decline steadily during this period, but by 1881 there were still 30,400 boys and 500 girls under 15 employed. See Table 1 below.

Table 1: Child Employment, 1851-1881

Industry & Age Cohort                   1851      1861      1871      1881
Mining
  Males under 15                      37,300    45,100    43,100    30,400
  Females under 15                     1,400       500       900       500
  Males 15-20                         50,100    65,300    74,900    87,300
  Females over 15                      5,400     4,900     5,300     5,700
  Total under 15 as % of work force      13%       12%       10%        6%
Textiles and Dyeing
  Males under 15                      93,800    80,700    78,500    58,900
  Females under 15                   147,700   115,700   119,800    82,600
  Males 15-20                         92,600    92,600    90,500    93,200
  Females over 15                    780,900   739,300   729,700   699,900
  Total under 15 as % of work force      15%       19%       14%       11%

Source: Booth (1886, 353-399).
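The combined declines implied by Booth’s figures can be checked with simple arithmetic. The short Python sketch below is purely illustrative and uses only the Table 1 numbers: it sums the male and female under-15 cohorts for each industry and computes the percentage fall between 1851 and 1881.

```python
# Under-15 employment from Booth (1886), as reported in Table 1.
# Each entry gives (males under 15, females under 15) for 1851 and 1881.
booth = {
    "Mining":              {1851: (37_300, 1_400),   1881: (30_400, 500)},
    "Textiles and Dyeing": {1851: (93_800, 147_700), 1881: (58_900, 82_600)},
}

for industry, years in booth.items():
    start = sum(years[1851])           # total under-15 workers in 1851
    end = sum(years[1881])             # total under-15 workers in 1881
    decline = (start - end) / start    # proportional fall over the period
    print(f"{industry}: {start:,} (1851) -> {end:,} (1881), a {decline:.0%} decline")
```

The sketch shows under-15 employment falling from 38,700 to 30,900 in mining (about 20%) and from 241,500 to 141,500 in textiles and dyeing (about 41%), consistent with the table’s picture of a slower decline in mining.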

Explanations for the Decline in Child Labor

There are many opinions regarding the reasons for the diminished role of child labor in these industries. Social historians believe it was the rise of the domestic ideology of the father as breadwinner and the mother as housewife, which became embedded in the upper and middle classes and spread to the working class. Economic historians argue that it was the rise in the standard of living accompanying the Industrial Revolution that allowed parents to keep their children home. Although mandatory schooling laws came too late to play a role, other scholars argue that families took a growing interest in education and began sending their children to school voluntarily. Finally, others claim that it was advances in technology and the new, heavier and more complicated machinery, which required the strength of skilled adult males, that led to the decline of child labor in Great Britain. Although child labor has become a fading memory for Britons, it remains a social problem and political issue for developing countries today.

References

Alfred (Samuel Kydd). The History of the Factory Movement. London: Simpkin, Marshall, and Co., 1857.

Booth, C. “On the Occupations of the People of the United Kingdom, 1801-81.” Journal of the Royal Statistical Society (J.S.S.) XLIX (1886): 314-436.

Chapman, S. J. The Lancashire Cotton Industry. Manchester: Manchester University Publications, 1904.

Clapham, Sir John. An Economic History of Modern Britain. Vol. I and II. Cambridge: Cambridge University Press, 1926.

Collier, Francis. The Family Economy of the Working Classes in the Cotton Industry, 1784-1833. Manchester: Manchester University Press, 1964.

Cruickshank, Marjorie. Children and Industry. Manchester: Manchester University Press, 1981.

Cunningham, Hugh. “The Employment and Unemployment of Children in England, c. 1680-1851.” Past and Present 126 (1990): 115-150.

Engels, Frederick. The Condition of the Working Class in England. Translated by the Institute of Marxism-Leninism, Moscow. Edited by E. J. Hobsbawm. London, 1969 [1926].

Freudenberger, Herman, Francis J. Mather, and Clark Nardinelli. “A New Look at the Early Factory Labour Force.” Journal of Economic History 44 (1984): 1085-90.

Hammond, J. L. and Barbara Hammond. The Town Labourer, 1760-1832. New York: A Doubleday Anchor Book, 1937.

House of Commons Papers (British Parliamentary Papers):
1833(450)XX Factories, employment of children. R. Com. 1st rep.
1833(519)XXI Factories, employment of children. R. Com. 2nd rep.
1834(44)XXVII Administration and Operation of Poor Laws, App. A, pt.1.
1834(44)XXXVI Administration and Operation of Poor Laws. App. B.2, pts. III,IV,V.
1834 (167)XX Factories, employment of children. Supplementary Report.
1842[380]XV Children’s employment (mines). R. Com. 1st rep.
1847-48[993]XXVI Mines and Collieries, Mining Districts. Commissioner’s rep.
1849[1109]XXII Mines and Collieries, Mining Districts. Commissioner’s rep.
1850[1248]XXIII Mining Districts. Commissioner’s rep.
1857-58[2424]XXXII Mines and Minerals. Commissioner’s rep.

House of Lords Papers:
1819(24)CX

Jenkin, A. K. Hamilton. The Cornish Miner: An Account of His Life Above and Underground From Early Times. London: George Allen and Unwin, Ltd., 1927.

Kussmaul, Ann. A General View of the Rural Economy of England, 1538-1840. Cambridge: Cambridge University Press, 1990.

Lemon, Sir Charles. “The Statistics of the Copper Mines of Cornwall.” Journal of the Royal Statistical Society I (1838): 65-84.

Marx, Karl. Capital. Vol. I. Chicago: Charles H. Kerr & Company, 1909.

Nardinelli, Clark. Child Labor and the Industrial Revolution. Bloomington: Indiana University Press, 1990.

Nardinelli, Clark. “Were Children Exploited During the Industrial Revolution?” Research in Economic History 2 (1988): 243-276.

Nardinelli, Clark. “Child Labor and the Factory Acts.” Journal of Economic History. 40, no. 4 (1980): 739-755.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1800. London: George Routledge and Sons, 1930.

Plener, Ernst Elder Von. English Factory Legislation. London: Chapman and Hall, 1873.

Redford, Arthur. Labour Migration in England, 1800-1850. Manchester: Manchester University Press, 1926.

Rule, John. The Experience of Labour in Eighteenth Century English Industry. New York: St. Martin’s Press, 1981.

Thompson, E. P. The Making of the English Working Class. New York: Vintage Books, 1966.

Tilly, L. A. and Scott, J. W. Women, Work and Family. New York: Holt, Rinehart, and Winston, 1978.

Tuttle, Carolyn. “A Revival of the Pessimist View: Child Labor and the Industrial Revolution.” Research in Economic History 18 (1998): 53-82.

Tuttle, Carolyn. Hard at Work in Factories and Mines: The Economics of Child Labor During the British Industrial Revolution. Oxford: Westview Press, 1999.

Ure, Andrew. The Philosophy of Manufactures. London, 1835.

Webb, Sidney and Webb, Beatrice. Problems of Modern Industry. London: Longmans, Green, 1898.

Citation: Tuttle, Carolyn. “Child Labor during the British Industrial Revolution”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL
http://eh.net/encyclopedia/child-labor-during-the-british-industrial-revolution/

The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey

Livio Di Matteo, Lakehead University

Introduction1

From a macro perspective, Canadian quantitative economic history is concerned with the collection and construction of historical time series data as well as the study of the performance of broad economic aggregates over time.2 The micro dimension of quantitative economic history focuses on individual and sector responses to economic phenomena.3 In particular, micro economic history is marked by the collection and analysis of data sets rooted in individual economic and social behavior. This approach uses primary historical records like census rolls, probate records, assessment rolls, land records, parish records and company records, to construct sets of socio-economic data used to examine the social and economic characteristics and behavior of those individuals and their society, both cross-sectionally and over time.

The expansion of historical micro-data studies in Canada has been a function of academic demand and supply factors. On the demand side, there has been a desire for more explicit use of economic and social theory in history; micro-data studies that make use of available records on individuals appeal to historians interested in understanding aggregate trends and in reaching the micro-underpinnings of larger macroeconomic and social relationships. For example, the late nineteenth century in Canada was a period of intermittent economic growth, and analyzing how that growth record affected different groups in society requires studies that disaggregate the population into sub-groups. One way of doing this that became attractive in the 1960s was to collect micro-data samples from relevant census, assessment or probate records.

On the supply side, computers have lowered research costs, making the analysis of large data sets feasible and cost-effective. The proliferation of low cost personal computers, statistical packages and data spread-sheets has led to another revolution in micro-data analysis, as computers are now routinely taken into archives so that data collection, input and analysis can proceed even more efficiently.

In addition, studies using historical micro-data are an area where economic historians trained either as economists or historians have been able to find common ground.4 Many of the pioneering micro-data projects in Canada were conducted by historians with some training in quantitative techniques, much of it acquired “on the job” out of intellectual interest and excitement rather than in graduate school. Historians and economists are united by their common analysis of primary micro-data sources and their use of sophisticated computer equipment, linkage software and statistical packages.

Background to Historical Micro-data Studies in Canadian Economic History

The early stage of historical micro-data projects in Canada attempted to systematically collect and analyze data on a large scale. Many of these micro-data projects crossed the lines between social and economic history, as well as demographic history in the case of French Canada. Path-breaking work by American scholars such as Lee Soltow (1971), Stephan Thernstrom (1973) and Alice Hanson Jones (1980) was an important influence on Canadian work. Their work on wealth and social structure and mobility using census and probate data drew attention to the extent of mobility — geographic, economic and social — that existed in pre-twentieth-century America.

However, Canadian historical micro-data work has been quite distinct from that of the United States, reflecting its separate tradition in economic history. Canada’s history is one of centralized penetration from the east via the Great Lakes-St. Lawrence waterway and the presence of two founding “nations” of European settlers – English and French – which led to strong Protestant and Roman Catholic traditions. Indeed, there was nearly 100 percent membership in the Roman Catholic Church for francophone Quebeckers for much of Canada’s history. As well, there is an economic reliance on natural resources, and a sparse population spread along an east-west corridor in isolated regions that have made Canada’s economic history, politics and institutions quite different from the United States.

The United States, from its early natural resource staples origins, developed a large, integrated internal market that was relatively independent of external economic forces, at least compared with Canada, and this shifted research topics away from trade and towards domestic resource allocation issues. At the level of historical micro-data, American scholars have had access to national micro-data samples for some time, which has not been the case in Canada until recently. Most of the early studies in Canadian micro-data were regional or urban samples drawn from manuscript sources and there has been little work since at a national level using micro-data sources. However, the strong role of the state in Canada has meant a particular richness to those sources that can be accessed and even the Census contains some personal details not available in the U.S. Census, such as religious affiliation. Moreover, earnings data are available in the Canadian census starting some forty years earlier than the United States.

Canadian micro-data studies have examined industry, fertility, urban and rural life, wages and labor markets, women’s work and roles in the economy, immigration and wealth. The data sources include census, probate records, assessment rolls, legal records and contracts, and are used by historians, economists, geographers, sociologists and demographers to study economic history.5 Very often, the primary sources are untapped and there can be substantial gaps in their coverage due to uneven preservation.

A Survey of Micro-data Studies

Early Years in English Canada

The fruits of early work in English Canada were books and papers by Frank Denton and Peter George (1970, 1973), Michael Katz (1975) and David Gagan (1981), among others.6 The Denton and George paper examined the influences on family size and school attendance in Wentworth County, Ontario, using the 1871 Census of Canada manuscripts. But it was Katz and Gagan’s work that generated greater attention among historians. Katz’s Hamilton Project used census, assessment rolls, city directories and other assorted micro-records to describe patterns of life in mid-nineteenth century Hamilton. Gagan’s Peel County Project was a comprehensive social and economic study of Peel County, Ontario, again using a variety of individual records including probate. These studies stimulated discussion and controversy about nineteenth-century wealth, inheritance patterns, and family size and structure.

The Demographic Tradition in French Canada

In French Canada, the pioneering work was the Saguenay Project organized by Gerard Bouchard (1977, 1983, 1992, 1993, 1996, 1998). Beginning in the 1970s, a major effort was made to create a computerized genealogical and demographic database for the Saguenay and Charlevoix regions of Quebec reaching back well into the nineteenth century. This data set, now known as the Balsac Register, contains data on 600,000 individuals (140,000 couples) and 2.4 million events (e.g. births, deaths and other life events), with enormous social-scientific and human-genetic possibilities. The material gathered has been used to examine fertility, marriage patterns, inheritance, agricultural production and literacy, as well as genetic predisposition toward disease, and formed the basis for a book spanning the history of population and families in the Saguenay from 1858 to 1971.

French Canada has a strong tradition of historical micro-data research rooted in demographic analysis.7 Another project underway since 1969 and associated with Bertrand Desjardins, Hubert Charbonneau, Jacques Légaré and Yves Landry is Le Programme de recherche en démographie historique (P.R.D.H) at the University of Montréal (Charbonneau, 1988; Landry, 1993; Desjardins, 1993). The database will eventually contain details on a million persons and their life events in Quebec between 1608 and 1850.

Industrial Studies

Only for the 1871 census have all of the schedules survived, and the industrial schedules of that census have been made machine-readable (Bloomfield, 1986; Borsa and Inwood, 1993). Kris Inwood and Phyllis Wagg (1993) have used the census manuscript industrial schedules to examine the survival of handloom weaving in rural Canada circa 1870. A total of 2,830 records were examined, using data on average product, capital and months of activity. The results show that the demand for woolen homespun was income-sensitive and that patterns of weaving by men and women differed, with male-headed firms working a greater number of months during the year and being more likely to have a second worker.

More recently, using a combination of aggregate capital market data and firm-level data for a sample of Canadian and American steel producers, Ian Keay and Angela Redish (2004) analyze the relationships between capital costs, financial structure and domestic capital market characteristics. They find that national capital market characteristics and firm-specific characteristics were important determinants of twentieth-century U.S. and Canadian steel firms’ financing decisions. Keay (2000) uses information from firms’ balance sheets and income accounts, together with industry-specific prices, to calculate labor, capital, intermediate-input and total factor productivities for a sample of 39 Canadian and 39 American manufacturing firms in nine industries. The firm-level data also allow for the construction of series that are consistent across nations, industries and time, including capital and value added. Inwood and Keay (2005) use establishment-level data describing manufacturers located in 128 border and near-border counties in Michigan, New York, Ohio, Pennsylvania and Ontario to calculate Canadian relative to U.S. total factor productivity ratios for 25 industries. Their results indicate that the average U.S. establishment was approximately 7% more efficient than its Canadian counterpart in 1870/71.

Population, Demographics & Fertility

Marvin McInnis (1977) assembled a body of census data on childbearing and other aspects of Upper Canadian households in 1861 and produced a sample of 1,200 farm households that was used to examine the relationship between child-bearing and land availability. He found that an abundance of nearby uncultivated land did raise the probability of there being young children in the household, but the magnitude of the influence was small. The strongest result was that fertility fell as larger cities developed sufficiently close by to exert a real influence through urban life and culture.

Eric Moore and Brian Osborne (1987) have examined the socio-economic differentials of marital fertility in Kingston. They related religion, birthplace, age of mother, ethnic origin and occupational status to changes in fertility between 1861 and 1881, using a data set of approximately 3,000 observations taken from the manuscript census. Their choice of variables allows for examination of the impact of economic factors as well as the importance of cultural attributes. William Marr (1992) took the first reasonably large sample of farm households (2,656) from the 1851-52 Census of Canada West and examined the determinants of fertility. He found that fertility differences between older and more newly settled regions were influenced by land availability at the farm level, but that farm location, with respect to the extent of agricultural development, did not affect fertility when age, birthplace and religion were held constant. Michael Wayne (1998) uses the 1861 Census of Canada to look at the black population of Canada on the eve of the American Civil War. Meanwhile, George Emery (1993) assesses the comprehensiveness and accuracy of aggregate vital statistics in Ontario between 1869 and 1952 by examining the process of recording vital statistics. Emery and Kevin McQuillan (1988) use case studies to examine mortality in nineteenth-century Ingersoll, Ontario.

Urban and Rural Life

A number of studies have examined urban and rural life. Bettina Bradbury (1984) has analyzed the census manuscripts of two working-class Montreal wards, Ste. Anne and St. Jacques, for the years 1861, 1871 and 1881. Random samples of one-tenth of the households in these parts of Montreal were taken, for a sample of nearly 11,000 individuals over three decades. The data were used to examine women and wage labor in Montreal. The evidence is that men were the primary wage earners; the wife’s contribution to the family economy lay not so much in her own wage labor, which was infrequent, as in organizing the economic life of the household and finding alternate sources of support.

Bettina Bradbury, Peter Gossage, Evelyn Kolish and Alan Stewart (1993) and Gossage (1991) have examined marriage contracts in Montreal over the period 1820-1840 and found that, over time, the use of marriage contracts changed, becoming a tool of a propertied minority. As well, a growing proportion of contract signers chose to keep the property of spouses separate rather than “in community.” The movement towards separation was most likely to be found among the wealthy where separate property offered advantages, especially to those engaged in commerce during harsh economic times. Gillian Hamilton (1999) looks at prenuptial contracting behavior in early nineteenth-century Quebec to explore property rights within families and finds that couples signing contracts tended to choose joint ownership of property when wives were particularly important to the household.

Chad Gaffield (1979, 1983, 1987) has examined social, family and economic life in the Eastern Ontario counties of Prescott-Russell, Alfred and Caledonia using aggregate census data, as well as manuscript data, for the period 1851-1881.8 He has applied the material to studying rural schooling and the economic structure of farm families and found systematic differences between the marriage patterns of Anglophones and Francophones, with Francophones tending to marry at a younger average age. Also, land shortages and the diminishing forest frontier created economic difficulties that led to reduced family sizes by 1881. Gaffield’s most significant current research project is his leadership of the Canadian Century Research Infrastructure (CCRI) initiative, one of the country’s largest research projects. The CCRI is creating cross-indexed databases from a century’s worth of national census information, enabling unprecedented understanding of the making of modern Canada. This effort will eventually lead to an integrated set of micro-data resources at a national level comparable to what currently exists for the United States.9

Business Records

Company and business records have also been used as a source of micro-data and insight into economic history. Gillian Hamilton has conducted a number of studies examining contracts, property rights and labor markets in pre-twentieth-century Canada. Hamilton (1996, 2000) examines the nature of apprenticing arrangements in Montreal around the turn of the nineteenth century, using apprenticeship contracts from a larger body of notarial records found in Quebec. The principal questions addressed are what determined apprenticeship length and when the decline of the institution began. Hamilton finds that the characteristics of both masters and their boys were important and that masters often relied on probationary periods to better gauge a boy’s worth before signing a contract. Probations, all else equal, were associated with shorter contracts.

Ann Carlos and Frank Lewis (1998, 1999, 2001, 2002) use Hudson’s Bay Company fur-trading records to study property rights, competition, and depletion in the eighteenth-century Canadian fur trade; their work represents an important foray into Canadian aboriginal economic history by studying the role of aboriginals as consumers. Doug McCalla (2001, 2005, 2005) uses store records from Upper Canada to examine consumer purchases in the early nineteenth century and gain insight into material culture. Barton Hamilton and Mary MacKinnon (1996) use Canadian Pacific Railway records to study changes between 1903 and 1938 in the composition of job separations and the probability of separation. The proportion of voluntary departures fell by more than half after World War I. They estimate independent competing-risk, piecewise-constant hazard functions for the probabilities of quits and layoffs. Changes in workforce composition lengthened the average worker’s spell, but a worker with any given set of characteristics was much more likely to be laid off after 1921, although many of these layoffs were only temporary.
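The piecewise-constant, cause-specific hazard approach used in the quits-and-layoffs literature can be illustrated with a small sketch. Within each tenure interval, the hazard for each exit type (quit or layoff) is estimated as the number of exits of that type divided by the person-years at risk in the interval. The tenure spells and interval breakpoints below are invented for illustration; the actual estimates in this literature come from railway personnel records and include worker covariates.

```python
# Sketch of piecewise-constant, cause-specific hazard estimation for
# competing risks (quits vs. layoffs).  The spell data are hypothetical.

# Each spell: (tenure in years at exit, exit type); "censored" means the
# worker was still employed when observation ended.
spells = [
    (0.5, "quit"), (1.2, "layoff"), (2.8, "quit"), (3.1, "censored"),
    (4.0, "layoff"), (6.5, "quit"), (7.2, "layoff"), (9.0, "censored"),
]

# Piecewise-constant tenure intervals (years): [0,2), [2,5), [5,10)
breaks = [0.0, 2.0, 5.0, 10.0]

def piecewise_hazards(spells, breaks, cause):
    """Estimate the cause-specific hazard in each interval as
    (exits of this cause) / (person-years at risk) within the interval."""
    hazards = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        exposure = 0.0
        events = 0
        for t, kind in spells:
            if t <= lo:
                continue                     # spell ended before this interval
            exposure += min(t, hi) - lo      # time at risk inside [lo, hi)
            if lo < t <= hi and kind == cause:
                events += 1
        hazards.append(events / exposure if exposure > 0 else 0.0)
    return hazards

quit_haz = piecewise_hazards(spells, breaks, "quit")
layoff_haz = piecewise_hazards(spells, breaks, "layoff")
```

The sketch shows only the raw occurrence/exposure calculation; in the econometric applications, worker characteristics shift these interval-specific rates through a regression specification.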

MacKinnon (1997) taps the CPR data again with a constructed sample of 9,000 employees hired before 1945, including 700 pensioners, and finds that features of the CPR pension plan are consistent with economic explanations of the role of pensions. Long, continuous periods of service were likely to be rewarded, and employees in the most responsible positions generally had higher pensions.

MacKinnon (1996) complements published Canadian nominal wage data by constructing a new hourly wage series, developed from firm records, for machinists, helpers, and laborers employed by the Canadian Pacific Railway between 1900 and 1930. This new evidence suggests that real wage growth in Canada was faster than previously believed and that there were substantial changes in wage inequality. In another contribution, MacKinnon (1990) studies unemployment relief in Canada by examining relief policies and recipients and contrasting the Canadian situation with unemployment insurance in Britain. She finds demographic factors important in explaining who went on relief, with older workers and those with large families most likely to be on relief for sustained periods. Another unique contribution to historical labor studies is Michael Huberman and Denise Young (1999). They examine a data set of 1,554 individual strikes in Canada from 1901 to 1914 and conclude that international unions did not weaken Canada’s union movement and that these unions became part of Canada’s industrial relations framework.

The 1891 and 1901 Census

An ongoing project is the 1891 Census of Canada Project at the University of Guelph under director Kris Inwood, which is making the information in this census available to the research public as a digitized sample of individual records. The project is hosted by the University of Guelph, with support from the Canadian Foundation for Innovation, the Ontario Innovation Trust and private-sector partners. Phase I (Ontario) of the project began during the winter of 2003 in association with the College of Arts Canada Research Chair in Rural History. The Ontario project continues until 2007. Phase II began in 2005; it extends data collection to the rest of the country and also creates an integrated national sample. The database includes information returned on a randomly selected 5% of the enumerators’ manuscript pages, with each page containing information describing twenty-five people. An additional 5% of census pages for western Canada and several large cities augments the basic sample. Ultimately the database will contain records for more than 350,000 people, bearing in mind that the population of Canada in 1891 was 3.8 million.
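The sampling design described above is a form of cluster sampling: manuscript pages, not individuals, are the sampling units, and every person on a selected page enters the database. A minimal sketch, with a hypothetical number of manuscript pages:

```python
import random

# Sketch of page-level (cluster) sampling as in the 1891 project's design:
# draw a random 5% of enumerators' manuscript pages, each describing
# twenty-five people.  The total number of pages here is invented.
PEOPLE_PER_PAGE = 25
n_pages = 10_000                      # hypothetical stock of manuscript pages
sample_rate = 0.05

rng = random.Random(1891)             # seeded for reproducibility
sampled_pages = rng.sample(range(n_pages), int(n_pages * sample_rate))

# Every person on a sampled page is transcribed, so the number of
# individual records is (sampled pages) x (people per page).
n_records = len(sampled_pages) * PEOPLE_PER_PAGE
```

Sampling whole pages rather than individuals keeps households and neighbours together in the data, at the cost of some clustering in the sample, which is why the project supplements the basic 5% with extra pages for western Canada and large cities.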

The release of the 1901 Census of Canada manuscript census has also spawned numerous micro-data studies. Peter Baskerville and Eric Sager (1995, 1998) have used the 1901 Census to examine unemployment and the work force in late Victorian Canada.10 Baskerville (2001a, 2001b) uses the 1901 census to examine the practice of boarding in Victorian Canada and, in another study, wealth and religion. Kenneth Sylvester (2001) uses the 1901 census to examine ethnicity and landholding. Alan Green and Mary MacKinnon (2001) use a new sample of individual-level data compiled from the manuscript returns of the 1901 Census of Canada to examine the assimilation of male wage-earning immigrants (mainly from the UK) in Montreal and Toronto. Unlike studies of post-World War II immigrants to Canada, and some recent studies of nineteenth-century immigration to the United States, they find slow assimilation to the earnings levels of native-born English mother-tongue Canadians. Green, MacKinnon and Chris Minns (2005) use 1901 census data to demonstrate that Anglophones and Francophones had very different personal characteristics, so that movement to the west was rarely economically attractive for Francophones. However, large-scale migration into New England fitted French Canadians’ demographic and human capital profile.

Wealth and Inequality

Recent years have also seen the emergence of a body of literature on wealth accumulation and distribution in nineteenth-century Canada. This work has provided quantitative measurements of the degree of inequality in wealth holding, as well as its evolution over time. Gilles Paquet and Jean-Pierre Wallot (1976, 1986) have examined the net personal wealth of wealth holders using “les inventaires après décès” (inventories taken after death) in Quebec during the late eighteenth and early nineteenth centuries. They have suggested that the habitant was indeed a rational economic agent who chose land as a form of wealth not because of inherent conservatism but because information and transaction costs hindered the accumulation of financial assets.

A. Gordon Darroch (1983a, 1983b) has utilized municipal assessment rolls to study wealth inequality in Toronto during the late nineteenth century. Darroch found that inequality among assessed families was such that the top one-fifth of assessed families held at least 65% of all assessed wealth and the poorest 40% never more than 8%, even though inequality did decline between 1871 and 1899. Darroch and Michael Ornstein (1980, 1984) used the 1871 Census to examine ethnicity, occupational structure and family life cycles in Canada. Darroch and Soltow (1992, 1994) research property holding in Ontario using 5,669 individuals from the 1871 census manuscripts and find “deep and abiding structures of inequality” accompanied by opportunities for mobility.
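The share-based measures Darroch reports (and the Gini coefficients common throughout this literature) are straightforward to compute from a list of individual wealth values. A minimal sketch with invented assessed-wealth figures:

```python
# Sketch of the inequality measures used in this literature: the wealth
# share of the top 20% and bottom 40%, and the Gini coefficient.
# The assessed-wealth values below are invented for illustration.

def shares_and_gini(wealth):
    w = sorted(wealth)
    n, total = len(w), sum(w)
    top20 = sum(w[int(0.8 * n):]) / total      # share held by richest fifth
    bottom40 = sum(w[:int(0.4 * n)]) / total   # share held by poorest 40%
    # Gini via the standard formula on sorted values:
    # G = 2 * sum(i * w_i) / (n * total) - (n + 1) / n,  i = 1..n
    gini = 2 * sum(i * wi for i, wi in enumerate(w, 1)) / (n * total) - (n + 1) / n
    return top20, bottom40, gini

wealth = [100, 150, 200, 250, 400, 600, 900, 1400, 2500, 3500]
top20, bottom40, gini = shares_and_gini(wealth)
# Here the top fifth holds 60% of wealth and the bottom 40% holds 7%.
```

Probate and assessment studies typically report exactly these quantile shares alongside a summary index such as the Gini, since the two convey complementary information about the shape of the distribution.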

Lars Osberg and Fazley Siddiq (1988, 1993) and Siddiq (1988) have examined wealth inequality in Nova Scotia using probated estates from 1871 and 1899. They found a slight shift towards greater inequality in wealth over time and concluded that the prosperity of the 1850-1875 period in Nova Scotia benefited primarily the Halifax-based merchant class. Higher levels of wealth were associated with being a merchant and with living in Halifax, as opposed to the rest of the province. Siddiq and Julian Gwyn (1992) used probate inventories from 1851 and 1871 to study wealth over the period. They again document a trend towards greater inequality, accompanied by rising wealth. In addition, Peter Ward has collected a set of 196 Nova Scotia probate records for Lunenburg County spanning 1808-1922, as well as a set of poll tax records for the same location between 1791 and 1795.11

Livio Di Matteo and Peter George (1992, 1998) have examined wealth distribution in late nineteenth-century Ontario using probate records and assessment roll data for Wentworth County for the years 1872, 1882, 1892 and 1902. They find a rise in average wealth levels up until 1892 and a decline from 1892 to 1902. Whereas the rise in wealth from 1872 to 1892 appears to have been accompanied by a trend towards greater equality in wealth distribution, the period 1892 to 1902 marked a return to greater inequality. Di Matteo (1996, 1997, 1998, 2001) uses a set of 3,515 probated decedents for all of Ontario in 1892 to examine the determinants of wealth holding, the wealth of the Irish, inequality and life-cycle accumulation. Di Matteo and Herb Emery (2002) use the 1892 Ontario data to examine life insurance holding and the extent of self-insurance as wealth rises. Di Matteo (2004, 2006) uses a newly constructed micro-data set for the Thunder Bay District from 1885-1920, consisting of 1,293 probated decedents, to examine wealth and inequality during Canada’s wheat boom era. Di Matteo is currently using Ontario probated decedents from 1902, linked to the 1901 census and combined with previous data from 1892, to examine the impact of religious affiliation on wealth holding.

Wealth and property holding among women has also been a specific topic of research.12 Peter Baskerville (1999) uses probate data to examine wealth holding by women in the cities of Victoria and Hamilton between 1880 and 1901 and finds that they were substantial property owners. The holding of wealth by women in the wake of property legislation is studied by Inwood and Sue Ingram (2000) and Inwood and Sarah Van Sligtenhorst (2004); their work chronicles the increase in female property holding following Canadian property law changes in the late nineteenth century. Inwood and Richard Reid (2001) also use the Canadian Census to examine the relationship between gender and occupational identity.

Conclusion

The flurry of recent activity in Canadian quantitative economic history using census and probate data bodes well for the future. Even the National Archives of Canada has now made digital images of census forms, as well as other primary records, available online.13 Moreover, projects such as the CCRI and the 1891 Census Project hold the promise of new, integrated data sources for future research on national as opposed to regional micro-data questions. We will be able to examine the extent of economic development, earnings and convergence both at the regional level and from a national perspective. Access to the 1911 and, in future, the 1921 Census of Canada will also provide fertile areas for research and discovery. The period between 1900 and 1921, spanning the wheat boom and the First World War, is particularly important as it coincides with Canadian industrialization, rapid economic growth and the further expansion of wealth and income at the individual level. Moreover, access to new samples of micro-data may also help shed light on aboriginal economic history during the nineteenth and early twentieth centuries, as well as the economic progress of women.14 In particular, the economic history of Canada’s aboriginal peoples after the decline of the fur trade and during Canada’s industrialization is an area where micro-data might be useful in illustrating economic trends and conditions.15

References:

Baskerville, Peter A. “Familiar Strangers: Urban Families with Boarders in Canada, 1901.” Social Science History 25, no. 3 (2001): 321-46.

Baskerville, Peter. “Did Religion Matter? Religion and Wealth in Urban Canada at the Turn of the Twentieth Century: An Exploratory Study.” Histoire sociale-Social History XXXIV, no. 67 (2001): 61-96.

Baskerville, Peter A. and Eric W. Sager. “Finding the Work Force in the 1901 Census of Canada.” Histoire sociale-Social History XXVIII, no. 56 (1995): 521-40.

Baskerville, Peter A., and Eric W. Sager. Unwilling Idlers: The Urban Unemployed and Their Families in Late Victorian Canada. Toronto: University of Toronto Press, 1998.

Baskerville, Peter A. “Women and Investment in Late-Nineteenth Century Urban Canada: Victoria and Hamilton, 1880-1901.” Canadian Historical Review 80, no. 2 (1999): 191-218.

Borsa, Joan, and Kris Inwood. Codebook and Interpretation Manual for the 1870-71 Canadian Industrial Database. Guelph, 1993.

Bouchard, Gerard. “Introduction à l’étude de la société saguenayenne aux XIXe et XXe siècles.” Revue d’histoire de l’Amérique française 31, no. 1 (1977): 3-27.

Bouchard, Gerard. “Les systèmes de transmission des avoirs familiaux et le cycle de la société rurale au Québec, du XVIIe au XXe siècle.” Histoire sociale-Social History XVI, no. 31 (1983): 35-60.

Bouchard, Gerard. “Les fichiers-réseaux de population: Un retour à l’individualité.” Histoire sociale-Social History XXI, no. 42 (1988): 287-94.

Bouchard, Gerard, and Regis Thibeault. “Change and Continuity in Saguenay Agriculture: The Evolution of Production and Yields (1852-1971).” In Canadian Papers in Rural History, Vol. VIII, edited by Donald H. Akenson, 231-59. Gananoque, ON: Langdale Press, 1992.

Bouchard, Gerard. “Computerized Family Reconstitution and the Measure of Literacy. Presentation of a New Index.” History and Computing 5, no 1 (1993): 12-24.

Bouchard, Gerard. Quelques arpents d’Amérique: Population, économie, famille au Saguenay, 1838-1971. Montreal: Boréal, 1996.

Bouchard, Gerard. “Economic Inequalities in Saguenay Society, 1879-1949: A Descriptive Analysis.” Canadian Historical Review 79, no. 4 (1998): 660-90.

Bourbeau, Robert, and Jacques Légaré. Évolution de la mortalité au Canada et au Québec 1831-1931. Montreal: Les Presses de l’Université de Montréal, 1982.

Bradbury, Bettina. “Women and Wage Labour in a Period of Transition: Montreal, 1861-1881.” Histoire sociale-Social History XVII (1984): 115-31.

Bradbury, Bettina, Peter Gossage, Evelyn Kolish, and Alan Stewart. “Property and Marriage: The Law and the Practice in Early Nineteenth-Century Montreal.” Histoire sociale-Social History XXVI, no. 51 (1993): 9-40.

Carlos, Ann, and Frank Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company, 1700-1763.” In The Other Side of the Frontier: Economic Explanations into Native American History, edited by Linda Barrington. Boulder, CO: Westview Press, 1998.

Carlos, Ann, and Frank Lewis. “Property Rights, Competition, and Depletion in the Eighteenth-century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann, and Frank Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 3 (2002): 285-317.

Carlos, Ann and Frank Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 1037-64.

Charbonneau, Hubert. “Le registre de population du Québec ancien: Bilan de vingt années de recherches.” Histoire sociale-Social History XXI, no. 42 (1988): 295-99.

Darroch, A. Gordon. “Occupational Structure, Assessed Wealth and Homeowning during Toronto’s Early Industrialization, 1861-1899.” Histoire sociale-Social History XVI (1983): 381-419.

Darroch, A. Gordon. “Early Industrialization and Inequality in Toronto, 1861-1899.” Labour/Le Travailleur 11 (1983): 31-61.

Darroch, A. Gordon. “A Study of Census Manuscript Data for Central Ontario, 1861-1871: Reflections on a Project and On Historical Archives.” Histoire sociale-Social History XXI, no. 42 (1988): 304-11.

Darroch, A. Gordon, and Michael Ornstein. “Ethnicity and Occupational Structure in Canada in 1871: The Vertical Mosaic in Historical Perspective.” Canadian Historical Review 61 (1980): 305-33.

Darroch, A. Gordon, and Michael Ornstein. “Family Coresidence in Canada in 1871: Family Life Cycles, Occupations and Networks of Mutual Aid.” Canadian Historical Association Historical Papers (1984): 30-55.

Darroch, A. Gordon, and Lee Soltow. “Inequality in Landed Wealth in Nineteenth-Century Ontario: Structure and Access.” Canadian Review of Sociology and Anthropology 29 (1992): 167-200.

Darroch, A. Gordon, and Lee Soltow. Property and Inequality in Victorian Ontario: Structural Patterns and Cultural Communities in the 1871 Census. Toronto: University of Toronto Press, 1994.

Denton, Frank T., and Peter George. “An Explanatory Statistical Analysis of Some Socio-economic Characteristics of Families in Hamilton, Ontario, 1871.” Histoire sociale-Social History 5 (1970): 16-44.

Denton, Frank T., and Peter George. “The Influence of Socio-Economic Variables on Family Size in Wentworth County, Ontario, 1871: A Statistical Analysis of Historical Micro-data.” Review of Canadian Sociology and Anthropology 10 (1973): 334-45.

Di Matteo, Livio. “Wealth and Inequality on Ontario’s Northwestern Frontier: Evidence from Probate.” Histoire sociale-Social History XXXVIII, no. 75 (2006): 79-104.

Di Matteo, Livio. “Boom and Bust, 1885-1920: Regional Wealth Evidence from Probate Records.” Australian Economic History Review 44, no. 1 (2004): 52-78.

Di Matteo, Livio. “Patterns of Inequality in Late Nineteenth-Century Ontario: Evidence from Census-Linked Probate Data.” Social Science History 25, no. 3 (2001): 347-80.

Di Matteo, Livio. “Wealth Accumulation and the Life Cycle in Economic History: Implications of Alternative Approaches to Micro-Data.” Explorations in Economic History 35 (1998): 296-324.

Di Matteo, Livio. “The Determinants of Wealth and Asset Holding in Nineteenth Century Canada: Evidence from Micro-data.” Journal of Economic History 57, no. 4 (1997): 907-34.

Di Matteo, Livio. “The Wealth of the Irish in Nineteenth-Century Ontario.” Social Science History 20, no. 2 (1996): 209-34.

Di Matteo, Livio, and J.C. Herbert Emery. “Wealth and the Demand for Life Insurance: Evidence from Ontario, 1892.” Explorations in Economic History 39, no. 4 (2002): 446-69.

Di Matteo, Livio, and Peter George. “Patterns and Determinants of Wealth among Probated Decedents in Wentworth County, Ontario, 1872-1902.” Histoire sociale-Social History XXXI, no. 61 (1998): 1-34.

Di Matteo, Livio, and Peter George. “Canadian Wealth Inequality in the Late Nineteenth Century: A Study of Wentworth County, Ontario, 1872-1902.” Canadian Historical Review LXXIII, no. 4 (1992): 453-83.

Emery, George N. Facts of Life: The Social Construction of Vital Statistics, Ontario, 1869-1952. Montreal: McGill-Queen’s University Press, 1993.

Emery, George, and Kevin McQuillan. “A Case Study Approach to Ontario Mortality History: The Example of Ingersoll, 1881-1971.” Canadian Studies in Population 15, (1988): 135-58.

Ens, Gerhard. Homeland to Hinterland: The Changing Worlds of the Red River Metis in the Nineteenth Century. Toronto: University of Toronto Press, 1996.

Gaffield, Chad. “Canadian Families in Cultural Context: Hypotheses from the Mid-Nineteenth Century.” Historical Papers, Canadian Historical Association (1979): 48-70.

Gaffield, Chad. “Schooling, the Economy and Rural Society in Nineteenth-Century Ontario.” in Childhood and Family in Canadian History, edited by Joy Parr. Toronto: McClelland and Stewart (1983): 69-92.

Gaffield, Chad. Language, Schooling and Cultural Conflict: The Origins of the French-Language Controversy in Ontario. Kingston and Montreal: McGill-Queen’s, 1987.

Gaffield, Chad. “Machines and Minds: Historians and the Emerging Collaboration.” Histoire sociale-Social History XXI, no. 42 (1988): 312-17.

Gagan, David. Hopeful Travellers: Families, Land and Social Change in Mid-Victorian Peel County, Canada West. Toronto: University of Toronto Press, 1981.

Gagan, David. “Some Comments on the Canadian Experience with Historical Databases.” Histoire sociale-Social History XXI, no. 42 (1988): 300-03.

Gossage, Peter. “Family Formation and Age at Marriage at Saint-Hyacinthe, Quebec, 1854-1891.” Histoire sociale-Social History XXIV, no. 47 (1991): 61-84.

Green, Alan, Mary MacKinnon and Chris Minns. “Conspicuous by Their Absence: French Canadians and the Settlement of the Canadian West.” Journal of Economic History 65, no. 3 (2005): 822-49.

Green, Alan, and Mary MacKinnon. “The Slow Assimilation of British Immigrants in Canada: Evidence from Montreal and Toronto, 1901.” Explorations in Economic History 38, no. 3 (2001): 315-38.

Green, Alan G., and Malcolm C. Urquhart. “New Estimates of Output Growth in Canada: Measurement and Interpretation.” In Perspectives on Canadian Economic History, edited by Douglas McCalla, 182-199. Toronto: Copp Clark Pitman Ltd., 1987.

Gwyn, Julian, and Fazley K. Siddiq. “Wealth Distribution in Nova Scotia during the Confederation Era, 1851 and 1871.” Canadian Historical Review LXXIII, no. 4 (1992): 435-52.

Hamilton, Barton, and Mary MacKinnon. “Quits and Layoffs in Early Twentieth Century Labour Markets.” Explorations in Economic History 33 (1996): 346-66.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3 (2000): 627-64.

Hamilton, Gillian. “Property Rights and Transaction Costs in Marriage: Evidence from Prenuptial Contracts.” Journal of Economic History 59, no. 1 (1999): 68-103.

Hamilton, Gillian. “The Market for Montreal Apprentices: Contract Length and Information.” Explorations in Economic History 33, no. 4 (1996): 496-523.

Hamilton, Michelle, and Kris Inwood. “The Identification of the Aboriginal Population in the 1891 Census of Canada.” Manuscript, University of Guelph, 2006.

Henripin, Jacques. Tendances et facteurs de la fécondité au Canada. Ottawa: Bureau fédéral de la Statistique, 1968.

Huberman, Michael, and Denise Young. “Cross-Border Unions: Internationals in Canada, 1901-1914.” Explorations in Economic History 36 (1999): 204-31.

Igartua, José E. “Les bases de données historiques: L’expérience canadienne depuis quinze ans – Introduction.” Histoire sociale-Social History XXI, no. 42 (1988): 283-86.

Inwood, Kris, and Phyllis Wagg. “The Survival of Handloom Weaving in Rural Canada circa 1870.” Journal of Economic History 53 (1993): 346-58.

Inwood, Kris, and Sue Ingram, “The Impact of Married Women’s Property Legislation in Victorian Ontario.” Dalhousie Law Journal 23, no. 2 (2000): 405-49.

Inwood, Kris, and Sarah Van Sligtenhorst. “The Social Consequences of Legal Reform: Women and Property in a Canadian Community.” Continuity and Change 19 no. 1 (2004): 165-97.

Inwood, Kris, and Richard Reid. “Gender and Occupational Identity in a Canadian Census.” Historical Methods 32, no. 2 (2001): 57-70.

Inwood, Kris, and Kevin James. “The 1891 Census of Canada.” Cahiers québécois de démographie, forthcoming.

Inwood, Kris, and Ian Keay. “Bigger Establishments in Thicker Markets: Can We Explain Early Productivity Differentials between Canada and the United States?” Canadian Journal of Economics 38, no. 4 (2005): 1327-63.

Jones, Alice Hanson. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia Press, 1980.

Katz, Michael B. The People of Hamilton, Canada West: Family and Class in a Mid-nineteenth-century City. Cambridge: Harvard University Press, 1975.

Keay, Ian. “Canadian Manufacturers’ Relative Productivity Performance: 1907-1990.” Canadian Journal of Economics 33, no. 4 (2000): 1049-68.

Keay, Ian, and Angela Redish. “The Micro-economic Effects of Financial Market Structure: Evidence from Twentieth -century North American Steel Firms.” Explorations in Economic History 41, no. 4 (2004): 377-403.

Landry, Yves. “Fertility in France and New France: The Distinguishing Characteristics of Canadian Behaviour in the Seventeenth and Eighteenth Centuries.” Social Science History 17, no. 4 (1993): 577-92.

MacKinnon, Mary. “Relief Not Insurance: Canadian Unemployment Relief in the 1930s.” Explorations in Economic History 27, no. 1 (1990): 46-83.

MacKinnon, Mary. “New Evidence on Canadian Wage Rates, 1900-1930.” Canadian Journal of Economics XXIX, no. 1 (1996): 114-31.

MacKinnon, Mary. “Providing for Faithful Servants: Pensions at the Canadian Pacific Railway.” Social Science History 21, no. 1 (1997): 59-83.

Marr, William. “Micro and Macro Land Availability as a Determinant of Human Fertility in Rural Canada West, 1851.” Social Science History 16 (1992): 583-90.

McCalla, Doug. “Upper Canadians and Their Guns: An Exploration via Country Store Accounts (1808-61).” Ontario History 97 (2005): 121-37.

McCalla, Doug. “A World without Chocolate: Grocery Purchases at Some Upper Canadian Country Stores, 1808-61.” Agricultural History 79 (2005): 147-72.

McCalla, Doug. “Textile Purchases by Some Ordinary Upper Canadians, 1808-1862.” Material History Review 53, (2001): 4-27.

McInnis, Marvin. “Childbearing and Land Availability: Some Evidence from Individual Household Data.” In Population Patterns in the Past, edited by Ronald Demos Lee, 201-27. New York: Academic Press, 1977.

Moore, Eric G., and Brian S. Osborne. “Marital Fertility in Kingston, 1861-1881: A Study of Socio-economic Differentials.” Histoire sociale-Social History XX (1987): 9-27.

Muise, Del. “The Industrial Context of Inequality: Female Participation in Nova Scotia’s Paid Workforce, 1871-1921.” Acadiensis XX, no. 2 (1991).

Myers, Sharon. “‘Not to Be Ranked as Women’: Female Industrial Workers in Halifax at the Turn of the Twentieth Century.” In Separate Spheres: Women’s Worlds in the Nineteenth-Century Maritimes, edited by Janet Guildford and Suzanne Morton, 161-83. Fredericton: Acadiensis Press, 1994.

Osberg, Lars, and Fazley Siddiq. “The Acquisition of Wealth in Nova Scotia in the Late Nineteenth Century.” Research in Economic Inequality 4 (1993): 181-202.

Osberg, Lars, and Fazley Siddiq. “The Inequality of Wealth in Britain’s North American Colonies: The Importance of the Relatively Poor.” Review of Income and Wealth 34 (1988): 143-63.

Paquet, Gilles, and Jean-Pierre Wallot. “Les Inventaires après décès à Montréal au tournant du XIXe siècle: preliminaires à une analyse.” Revue d’histoire de l’Amérique française 30 (1976): 163-221.

Paquet, Gilles, and Jean-Pierre Wallot. “Stratégie Foncière de l’Habitant: Québec (1790-1835).” Revue d’histoire de l’Amérique française 39 (1986): 551-81.

Seager, Allen, and Adele Perry. “Mining the Connections: Class, Ethnicity and Gender in Nanaimo, British Columbia, 1891.” Histoire sociale-Social History 30, no. 59 (1997): 55-76.

Siddiq, Fazley K. “The Size Distribution of Probate Wealth Holdings in Nova Scotia in the Late Nineteenth Century.” Acadiensis 18 (1988): 136-47.

Soltow, Lee. Patterns of Wealthholding in Wisconsin since 1850. Madison: University of Wisconsin Press, 1971.

Sylvester, Kenneth Michael. “All Things Being Equal: Land Ownership and Ethnicity in Rural Canada, 1901.” Histoire sociale-Social History XXXIV, no. 67 (2001): 35-59.

Thernstrom, Stephan. The Other Bostonians: Poverty and Progress in the American Metropolis, 1880-1970. Cambridge: Harvard University Press, 1973.

Urquhart, Malcolm C. Gross National Product, Canada, 1870-1926: The Derivation of the Estimates. Montreal: McGill-Queen’s, 1993.

Urquhart, Malcolm C. “New Estimates of Gross National Product Canada, 1870-1926: Some Implications for Canadian Development.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 9-94. Chicago: University of Chicago Press, 1986.

Wayne, Michael. “The Black Population of Canada West on the Eve of the American Civil War: A Reassessment Based on the Manuscript Census of 1861.” In A Nation of Immigrants: Women, Workers and Communities in Canadian History, edited by Franca Iacovetta, Paula Draper and Robert Ventresca. Toronto: University of Toronto Press, 1998.

Footnotes

1 The helpful comments of Herb Emery, Mary MacKinnon and Kris Inwood on earlier drafts are acknowledged.

2 See especially Mac Urquhart’s spearheading of the major efforts in national income and output estimates. (Urquhart, 1986, 1993)

3 “Individual response” means by individuals, households and firms.

4 See Gaffield (1988) and Igartua (1988).

5 The Conference on the Use of Census Manuscripts for Historical Research held at Guelph in March 1993 was an example of the interdisciplinary nature of historical micro-data research. The conference was sponsored by the Canadian Committee on History and Computing, the Social Sciences and Humanities Research Council of Canada and the University of Guelph. The conference was organized by economist Kris Inwood and historian Richard Reid and featured presentations by historians, economists, demographers, sociologists and anthropologists.

6 The Denton/George project had its origins in a proposal to the Second Conference on Quantitative Research in Canadian Economic History in 1967 that a sampling of the Canadian census be undertaken. Denton and George drew a sample from the manuscript census returns for individuals for 1871 that had recently been made available, and reported their preliminary findings to the Fourth Conference in March, 1970 in a paper that was published shortly afterwards in Histoire sociale/Social History (1970). Mac Urquhart’s role here must be acknowledged. He and Ken Buckley were insistent that a sampling of Census manuscripts would be an important venture for the conference members to initiate.

7 Also, sources such as the aggregate census have been used to examine fertility by Henripin (1968) and mortality by Bourbeau and Légaré (1982).

8 Chad Gaffield, Peter Baskerville and Alan Artibise were also involved in the creation of a machine-readable listing of archival sources on Vancouver Island known as the Vancouver Islands Project (Gaffield, 1988, 313).

9 See Chad Gaffield, “Ethics, Technology and Confidential Research Data: The Case of the Canadian Century Research Infrastructure Project,” paper presented to the World History Conference, Sydney, July 3-9, 2005.

10 Baskerville and Sager have been involved in the Canadian Families Project. See “The Canadian Families Project”, a special issue of the journal Historical Methods, 33 no. 4 (2000).

11 See Don Paterson’s Economic and Social History Data Base at the University of British Columbia at http://www2.arts.ubc.ca/econsochistory/data/data_list.html

12 Examples of other aspects of gender and economic status in a regional context are covered by Muise (1991), Myers (1994) and Seager and Perry (1997).

13 See http://www.collectionscanada.ca/genealogy/022-500-e.html

14 See for example the work by Gerhard Ens (1996) on the Red River Metis.

15 Hamilton and Inwood (2006) have begun research into identifying the aboriginal population in the 1891 Census of Canada.

Citation: Di Matteo, Livio. “The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey.” EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2007. URL: http://eh.net/encyclopedia/the-use-of-quantitative-micro-data-in-canadian-economic-history-a-brief-survey/